Chris Webb's BI Blog

Analysis Services, MDX, PowerPivot, DAX and anything BI-related

Archive for October 2005

Usage-Based Partitioning

leave a comment »

I was reading Dave Wickert’s excellent white paper "Project REAL: Analysis Services Technical Drilldown" the other day (you can get it here), specifically the section about partitioning on p39. In it he discusses the new functionality in AS2005 which automatically determines which members from your dimensions have data in a given partition, and he goes on to talk about the new possibilities this opens up in terms of partitioning strategy. Here’s an excerpt:
 

The partitions in the Project REAL database seem to violate one of the basic best practices of SQL Server 2000. There is no data slice set for the partitions. In SQL Server 2000, partitions must have the data slice set so that the run-time engine knows which partition to access. This is similar to specifying a hint to a relational query optimizer. In SQL Server 2005, this is no longer necessary. Processing the partition now automatically builds a histogram-like structure in the MOLAP storage. This structure identifies which members from all dimensions are included in the partition. Thus, so long as the storage method is MOLAP, the data slice is an optional (and unused) property. However, the data slice is used with ROLAP storage or when proactive caching involves a ROLAP access phase. In both of these circumstances, the actual fact data is never moved so the system does not have a chance to identify a member. In this case, setting the data slice for the partition remains a necessary and critical step if you expect the system to perform well.

Because the MOLAP structures dynamically determine the data slice, a new type of partitioning technique is possible with SQL Server 2005. The best way to describe this technique is via a simple example.

Suppose a system that you are designing has a product dimension of 1,000 products. Of these, the top 5 products account for 80% of the sales (roughly evenly distributed). The remaining 995 products account for the other 20% of the sales. An analysis of the end-user query patterns shows that analysis based on product is a common and effective partitioning scheme. For example, most of the reports include a breakdown by product. Based on this analysis, you create six partitions. You create one partition each for the top 5 products and then one “catchall” partition for the remainder. It is easy to create a catchall partition. In the query binding, add a WHERE clause to the SQL statement as in the following code.

In the top five partitions (1 through 5) use the following code.

      SELECT * FROM <fact table>
      WHERE SK_Product_ID = <SK_TopNthProduct#>

In the catchall partition use the following code.

      SELECT * FROM <fact table>
      WHERE SK_Product_ID NOT IN (<SK_TopProduct#>,
                                  <SK_2ndTopProduct#>,
                                  <SK_3rdTopProduct#>,
                                  <SK_4thTopProduct#>,
                                  <SK_5thTopProduct#>)

This technique requires a lot of administrative overhead in SQL Server 2000 Analysis Services. In SQL Server 2000, the data slice must identify each and every member in the partition—even if there are thousands and thousands of members. To implement the example, you would need to create the catchall partition data slice with 995 members in it. This is in addition to the administrative challenge of updating that list as new members are added to the dimension. In SQL Server 2005 Analysis Services, the automatic building of the data slice in the partition eliminates the administrative overhead.
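
Just to make that maintenance point concrete before going on: here’s a minimal sketch in Python of how you might generate the query bindings for the top-5-plus-catchall scheme. The table name, column name and surrogate keys are placeholders of my own, not anything from Project REAL; the point is that in AS2005 keeping this SQL up to date is all the maintenance required, whereas in AS2000 you would also have had to enumerate the 995 remaining members in the catchall partition’s data slice.

      # Generate SQL query bindings for a top-N + catchall partitioning scheme.
      # FactSales, SK_Product_ID and the keys are placeholder names/values.
      def partition_queries(fact_table, key_column, top_keys):
          queries = {}
          for i, key in enumerate(top_keys, start=1):
              # One partition per top product.
              queries[f"Top{i}"] = (
                  f"SELECT * FROM {fact_table} WHERE {key_column} = {key}"
              )
          # One catchall partition for everything else.
          in_list = ", ".join(str(k) for k in top_keys)
          queries["CatchAll"] = (
              f"SELECT * FROM {fact_table} "
              f"WHERE {key_column} NOT IN ({in_list})"
          )
          return queries

      if __name__ == "__main__":
          bindings = partition_queries(
              "FactSales", "SK_Product_ID", [101, 102, 103, 104, 105])
          for name, sql in bindings.items():
              print(f"{name}: {sql}")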

 
This got me thinking… if we’ve got a Usage-Based Optimisation wizard to help design the right aggregations for a cube, surely it’s possible to do something similar so that we can design partitions on the basis of the queries that users actually run? Here’s an idea on how it might work (NB this would be a strategy to use in addition to partitioning by Time, Store or other ‘obvious’ slices, rather than a replacement):
  • First, get a log of all the queries that users are actually running. Unfortunately the Query Log in AS2005, like AS2000, doesn’t record the MDX of the queries run; the only way to capture it is to use Profiler. I was a bit worried about whether doing this would have an adverse impact on query performance, but when I put the question to MS they indicated the impact shouldn’t be much (Richard Tkachuk also mentioned, as an aside, that turning off Flight Recorder should result in an increase of a few % in query performance – a tip to remember for production boxes, I think). Once you’ve run your trace you can export all of the MDX statements from it to a text file very easily.
  • You’d then need a bit of code to extract the unique names of all the members mentioned explicitly in these queries – a fairly simple task if you get the right regular expression, I think (see the sketch after this list). Note that this ignores queries which use any kind of set expression; my thinking is that individually named members are the most interesting because they’re the ones which slice the queries the most. If users are querying on all the countries in the cube that’s no use for partitioning, but if they have a particular product in the WHERE clause, that’s much more useful to know about.
  • Then you could do some data mining to cluster these members by their propensity to appear in a query together. The idea is that each of the resulting clusters would translate into a partition; members which didn’t fall nicely into a cluster, and members which didn’t get extracted in step #2, would have their data fall into one of Dave’s ‘catch-all’ partitions. Imagine this scenario: the UK branch of the Adventure Works corporation suddenly finds there is massive demand for bikes after petrol (‘gas’, for you Americans) prices rise massively. As a result, analysts in the UK run lots of queries which are sliced by the Product Category [Bikes] and the Country [UK]. You’d hope that this pattern would emerge in the clustering and result in a single partition containing all the data for ([Bikes], [UK]), so that in future similar queries run much faster.
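
To make steps 2 and 3 a bit more concrete, here’s a minimal sketch in Python of what the member extraction and a crude co-occurrence count might look like. Everything in it is my own illustration: the regular expression is a simplification of member unique names (real ones can contain escaped brackets), trace_mdx.txt is a hypothetical export of the Profiler trace with one MDX statement per line, and counting pairs is only a rough stand-in for running a proper algorithm such as Microsoft Clustering over the per-query member sets.

      import re
      from collections import Counter
      from itertools import combinations

      # Rough pattern for member unique names such as
      # [Product].[Category].&[Bikes] or [Customers].[Country].[UK].
      # NB: a simplification - real unique names can contain escaped brackets.
      MEMBER_RE = re.compile(r"(?:\[[^\]]+\]\.)+&?\[[^\]]+\]")

      def members_in_query(mdx):
          # Return the set of explicitly named members in one MDX statement.
          return set(MEMBER_RE.findall(mdx))

      def cooccurrence_counts(statements):
          # Count how often each pair of members appears in the same query;
          # pairs with high counts are candidates to share a partition.
          pairs = Counter()
          for mdx in statements:
              for a, b in combinations(sorted(members_in_query(mdx)), 2):
                  pairs[(a, b)] += 1
          return pairs

      if __name__ == "__main__":
          with open("trace_mdx.txt") as f:
              statements = [line.strip() for line in f if line.strip()]
          for (a, b), n in cooccurrence_counts(statements).most_common(10):
              print(n, a, b)

The pair counts are just a quick way of eyeballing which members travel together; for the real thing you’d feed the per-query member sets into a clustering algorithm and turn each strong cluster into a partition slice.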

What does everyone think? There seems to be a lot of activity these days in the comments section of my blog, so I thought I’d invite feedback. Can anyone see a fatal flaw in this approach?

 

Written by Chris Webb

October 20, 2005 at 2:54 pm

Posted in Random Thoughts

Tableau v1.5 released

with 2 comments

Version 1.5 of Tableau – in my opinion probably the best-looking, easiest-to-use and most innovative (but unfortunately also rather expensive and fat-client-only) AS client tool – has just been released. You can see a list of all the new features here, chief of which is support for AS2005. If you’re looking for an AS client tool I strongly recommend you download a trial and take a look, even if you don’t think it can meet all your requirements – it really shows up how poor the other client tools out there are in user-interface terms.
 
I did a tiny bit of beta testing on this release and remained as impressed as I was when I first saw it. However, the discovery that you can’t use Time Utility dimensions with the tool – a modelling technique which is going to be very common with AS2005, since that’s the structure the Time Intelligence Wizard builds to hang all your time calculations such as YTD and Previous Period Growth off – was a bit of a disappointment. I found the dev team very intelligent and responsive to feedback, though, and they’ve promised to look at this problem for the next release…

Written by Chris Webb

October 19, 2005 at 11:52 am

Posted in Client Tools

So, what is the UDM?

with 16 comments

The other week I went to an evening event at Microsoft’s UK office in Reading, given by Matt Stephen. It was a general introduction to BI in SQL2005 and, as such, attended by people who didn’t know much at all about the new features in AS, RS, IS and so on. All the familiar PowerPoints were shown and much was made of the Unified Dimensional Model as being the best thing since sliced bread. I’m sure almost everyone reading this has seen these presentations, especially the slides where the relational reporting and OLAP reporting worlds ‘come together’ like two pieces of a jigsaw, and the one where Analysis Services is described as a cache on top of your data warehouse. At the end of the session, though, the very first question asked was one which I think had been on a lot of people’s minds – "What exactly is the UDM?". This reminded me of the first time I saw any presentations on Yukon AS at an airlift in Redmond two-and-a-half years ago: for a while afterwards I was confused over what exactly the UDM was too. And Myles Matheson, in a blog entry from a month or so back, feels obliged to answer exactly the same question, so I suspect this is a common reaction.
 
The answer is actually pretty simple: the UDM is just the cube in Analysis Services 2005. Because it can now model so many more features of a relational data warehouse (e.g. many-to-many dimensions, role-playing dimensions), the message is that there’s now no reason to run queries directly against your data warehouse at all, because you’ll get much better performance and query flexibility by building a cube and querying that instead. From a technical point of view I have no problems at all with the claims being made here – in my experience AS2005 lives up to its hype as much as any software product can – but I didn’t understand why all this talk of the UDM, and the resulting confusion, was necessary. Why not just talk about the new capabilities of cubes in AS2005?
 
Then I came up with the following theory. The UDM doesn’t really exist as a feature; it’s more of a marketing concept, and marketing concepts are meant to help sell a product. So who are the new customers that Microsoft is trying to target with AS2005? Probably the same kind of new customers that AS2000 won over: people who hadn’t been customers of other BI companies, but who had either been priced out of the market or had tried to hand-code their own BI solutions using a relational database and encountered the usual problems. They’re going to be easier to sell to than someone who already has a big investment in Cognos, Essbase or Oracle. In my experience there’s a vast number of people out there who are still in this position, but in contrast to the people who picked up on AS2K they’re by nature a bit more cautious and unwilling to leave their relational comfort zone – they know about OLAP but they’re not sure they want to learn a new technology. This constituency is, in my opinion, who the whole UDM pitch is aimed at: let’s not talk about cubes, because that might frighten you, but let’s talk about the cube as a cache (which is less threatening) and the UDM as something that is the successor to both relational reporting and OLAP reporting.
 
So this is why I think I was confused: I was meant to be confused. Quite a clever strategy to avoid knee-jerk anti-cube prejudice or fear, then, if it works. But does it work? Well, maybe, maybe not. The fact I was confused doesn’t really matter because I’m cube-friendly anyway, but the confused relational guy’s first reaction to hearing about the UDM is to start asking questions to try to clarify the situation. And what I found interesting at Matt Stephen’s presentation was that the second question asked was exactly the same question I asked when I was trying to understand what the UDM was: since the UDM is a replacement for both OLAP and relational reporting, can you therefore run both SQL and MDX queries against the UDM? The answer is a qualified no: although AS2005, like AS2K, does support querying through a subset of SQL, that subset is so limited that it isn’t practically useful. You have to learn MDX to query your UDM, or buy a tool that will generate MDX for you. I suspect that this is the point where many relational guys turn off, having realised that the UDM is the cube and that they’ll still have to learn a completely new, non-SQL technology.
 
This then leads on nicely to the question of whether OLAP is better off shoe-horned into the relational world and queried with SQL, as I understand Oracle have done with what used to be Express, or whether it’s better off as a distinct technology with its own query language as Microsoft have done. I touched on this topic a few months ago here, and as you might have guessed I’m in favour of the Microsoft approach. I don’t blame Microsoft for trying to blur this distinction though, as anything that will get people to look at AS2005 is a good thing in my book. It’s just that I’m not sure that your average BI customer can be hoodwinked in this way for very long, that’s all…
 
 

Written by Chris Webb

October 14, 2005 at 2:08 pm

Posted in Analysis Services

Analysis Services Book List (attempt #2)

with 3 comments

Since I can’t get my links to Amazon working in an MSN Spaces list, I thought I’d just put my book list in a regular entry and then update it as necessary.
 
Microsoft SQL Server 2005 Analysis Services: Irina Gorbach, Alexander Berger, Py Bateman, Edward Melomed

 
Updates/News:
Teo Lachev has announced that ‘Applied Microsoft Analysis Services 2005’ has gone to the printers and will be available by the end of November. More details and resources can be found here.
 
Mosha has announced that the second edition of ‘Fast Track to MDX’ is on the verge of publication. He has more details and some comments on other books on this list here.
 
Nick Barclay has a review of ‘Data Mining with SQL2005’ on his blog here. I’ve also just bought a copy and will be reviewing it as soon as I’ve read it properly! First impressions are good though.
 
If anyone wants to send me a free copy of their book for review (cheeky idea for getting free books, I know, but it might just work!) then please drop me a line at the email address mentioned in my profile.
 
Thanks to Nick Barclay again for pointing out that ‘The Microsoft Data Warehouse Toolkit’ (listed above as ‘Data Warehousing with SQL 2005’ – I’ll update the link when Amazon UK updates its page for the book) has its own web page with some content.
 
Nick Barclay has a positive review of ‘Applied Analysis Services 2005’ on his blog here. Mark Hill also reviews it favourably here.
 
I have a review of ‘Data Mining with Analysis Services 2005′ here.
 
Nick Barclay has a review of ‘MDX Solutions’ second edition here.

Written by Chris Webb

October 10, 2005 at 4:43 pm

Posted in Books

The AS Dev Team wants your feedback

leave a comment »

Just spotted this post on the Analysis Services 2005 beta public newsgroup by Mosha (who I guess is a bit too busy with other work at the moment to put it in his blog), asking for feedback on MDX changes and performance. It’s good to see that the AS Dev team are as interested in engaging with customers as they always have been, but if I do have a criticism it’s that beta testers would be better able to test out new functionality if they actually knew what it was. I don’t want to sound too negative here, but for instance I know that MDX Scripts have changed a lot over the last six months, and if the only information you had to work with was Richard Tkachuk’s white paper (which is now out of date in a few respects), Mosha’s blog and BOL, you’d probably struggle to understand what’s going on, let alone implement any apps which really push MDX Scripts to the limit.
 
I’ve been lucky in the amount of access I’ve had to Redmond to get my questions answered – Matt Carroll and Marin Bezic, take a bow – but I know from talking to other people that they’ve been frustrated at the lack of information available. I suppose the onus is on people like me, who do have the knowledge, to spread it around by blogging etc. Unfortunately I don’t have as much time as I’d like to blog or answer questions via email (I also have to work), and in any case I’m under obligation to my publishers and co-authors to save the really detailed explanations of new functionality for ‘MDX Solutions’. Similarly the dev team, although I know they make a really big effort, are obviously more focussed on building the product than writing about it. Maybe the SQL Server team needs to recruit some full-time bloggers to pump the information out to the community. Now that would be a cool job to have…

Written by Chris Webb

October 4, 2005 at 4:54 pm

Posted in On the internet

Dundas OLAP Services

leave a comment »

I see that Dundas have entered the market for ADOMD and ADOMD.Net client components with Dundas OLAP Services. It’s available in Windows Forms and ASP.Net flavours and although it doesn’t offer anything much in terms of functionality that isn’t already available, I’ll be taking a look because a) the web component looks prettier than most of the competition, which isn’t hard, and b) it’s from Dundas rather than a one-man-and-a-dog software company, so there’s less risk about future support.
 
UPDATE: you can see a live demo on Foodmart 2000 here. Having looked at it briefly, it’s as I thought: it does nothing new, but those charts are nice to look at.

Written by Chris Webb

October 4, 2005 at 12:01 pm

Posted in Client Tools
