Chris Webb's BI Blog

Analysis Services, MDX, PowerPivot, DAX and anything BI-related

Archive for November 2006

Last Night’s BI Event


Just a quick note to say thanks to everyone who presented at and attended last night’s BI event. As ever it was good to see people in the flesh that I’d not met before (eg Charlie Maitland) as well as all the usual suspects…
 
I’ll try to get hold of the presentations and make them available for download, probably on http://sqlserverfaq.com/. In the meantime, David Parker’s presentation can be downloaded here (it’s 6Mb though!):

Written by Chris Webb

November 30, 2006 at 2:25 pm

Posted in Events

OLAP Warehouse Project


As I mentioned in my last post, I had a demo of Intelligencia the other week and at the same time had quite an interesting chat with the guy behind it, Andrew Wiles. He’s just emailed me to say that he’s started blogging, and on his blog he’ll be sharing some work he’s been doing to build the same financial OLAP application in AS2005 and TM1, to compare the strengths and weaknesses of the two platforms and establish some best practices for design. You can find out more about this project and download some files here:

Written by Chris Webb

November 27, 2006 at 5:24 pm

Posted in Analysis Services

Client Tools: The Next Generation


After the ProClarity acquisition earlier this year, my thinking was that, while it was a good thing overall for the Microsoft BI community, it would have the negative effect of reducing choice and innovation. In the months since, though, I’ve been surprised to see a steady stream of new third-party applications enter the market. Some, like RSInteract, I’ve mentioned before, but since there are quite a few now that are worth mentioning I thought I’d round them up in a single post.

Radius, from 90 Degree Software, has got a bit of exposure over the last few days thanks to Russell Christopher’s blog entry. Russell sums up its features pretty well, and I have to say I was quite impressed by the various webcasts I’ve seen – I like the idea of being able to reuse pieces of other people’s reports while building your own. I’m not sure it supports AS as a data source at the moment, alas, but I was told that they are planning to do so soon. While the general rubbishness of Report Builder can only help Radius’s prospects, it also highlights the point I want to make here: although Microsoft have now killed the market for traditional, vanilla BI apps, if you are innovative and execute your ideas efficiently there are still going to be openings for you.

Of course, visualisation has been another area where client tools have sought to set themselves apart from the pack over the last year. I blogged about Tableau a while ago and Fractal:Edge the other week, and more and more tools are improving their visualisation capabilities. Take a look at the pie-chart tree report or the data mining report demos at http://www.reportportal.com/, for instance. It’s not surprising that the likes of Dundas (disclosure: I recently signed up to their partner program and got a freebie chess set in the post from them this morning) are also entering this space – Dundas OLAP 5.5 has just been released, with some cool features like the ability to draw freehand over a report or add comments and arrows to it; I see Chart FX have something similar, Chart FX OLAP, but I’ve not checked it out yet.

Another way to distinguish your product is to focus on a particular niche. I saw a demo of Intelligencia the other week and initially couldn’t see what it offered beyond any other, more general-purpose tool. However, it’s the details that are important, and this tool is targeted specifically at financial users who want to create printable reports for regulatory (Sarbanes-Oxley etc.) reasons. This is something you’d struggle to do with, say, Reporting Services, and having the tool as a Word 2007 add-in makes a lot of sense; the OLAP querying functionality is also aimed at financial-style reports, making it easy to build up asymmetric sets of tuples, for example. Also in the financial apps space is the more general CPM tool Calumo; it looks impressive and is probably more sophisticated than Biz#/PerformancePoint will be, and I suppose the advantage of being a small software company in this area is that you can respond quickly to your customers’ requests and try to stay one step ahead of Microsoft.

So, then, a thousand flowers are blooming and all that. Having cool features does not necessarily entail commercial success but I wish all these tools well and I’m pleased to see that there’s still a lot of diversity out there – it can only benefit us all. And remember, if you’ve got a product that uses Analysis Services and you want a bit of free publicity on this blog then please send me details…

Written by Chris Webb

November 27, 2006 at 5:08 pm

Posted in Uncategorized

DDD4


Just thought I’d mention that I’ll be speaking at Developer Developer Developer 4 a week on Saturday:
…which I’m really looking forward to (must polish up my demos beforehand though). If you’re going to be there too, please say hello!
 
I’ve also been asked to pass on the following message from the organisers to anyone who’s going:

For DDD4 we are organising Grok Talk and Park Bench lunchtime activities, in which we hope to encourage delegates to participate. You can help!

The Grok Talks are open to anyone: a wiki has been set up where people can put down their name and their talk, and/or suggest topics for the Park Bench. If you are attending DDD4 we would like to invite you to submit a Grok Talk. As we have had a low response so far, we would appreciate it if you could blog about this opportunity or mention it to anyone you know who might be interested. As a reward, an MSDN subscription will be given on the day for the best Grok Talk.

Wiki link – http://www.thehughpage.com/DDD4LunchTime 

DDD link – http://www.developerday.co.uk/ddd/default.asp

 
Oh, and don’t forget about next week’s BI evening in Reading:

Written by Chris Webb

November 23, 2006 at 6:02 pm

Posted in Events

Processing Dimensions By Attribute and By Table


It was recently pointed out to me (by Hugo Lopes and Paulo Faria of Enabler in Portugal, who I had the pleasure of working with recently) that setting the ProcessingGroup property of a dimension can have quite a big impact on the amount of time it takes to process that dimension. By default this property is set to By Attribute, which means that each attribute in the dimension is processed separately, using its own SQL Select statement that returns the distinct values for that attribute. Here’s an example of the query used to process the Color attribute of the Adventure Works Product dimension:

SELECT
DISTINCT
[dbo_DimProduct].[Color] AS [dbo_DimProductColor0_0]
FROM [dbo].[DimProduct] AS [dbo_DimProduct]

If you set it to By Table, however, in some circumstances (only when your dimension is built from a single table, perhaps?) just one SQL Select statement per table in the dimension structure is issued, reading all the data in, with AS presumably finding the distinct values itself.
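
For comparison, here’s a sketch of what that single By Table query might look like for the same dimension – a hypothetical example, since the exact list of columns AS asks for will depend on your dimension’s design:

SELECT
[dbo_DimProduct].[ProductKey] AS [dbo_DimProductProductKey0_0],
[dbo_DimProduct].[EnglishProductName] AS [dbo_DimProductEnglishProductName0_1],
[dbo_DimProduct].[Color] AS [dbo_DimProductColor0_2]
FROM [dbo].[DimProduct] AS [dbo_DimProduct]

Note there’s no DISTINCT here: the deduplication work presumably moves from your relational database over to Analysis Services.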

When do you use which setting? I suppose it depends on a lot of things (performance of your data source, size and structure of your dimension, whether you’re processing other objects at the same time) so it’s going to be one of those things you’ll have to experiment with. Obviously most dimensions process extremely quickly anyway, but if you’ve got a large dimension which you need to process as quickly as possible this might come in handy.

Written by Chris Webb

November 21, 2006 at 9:44 pm

Posted in Uncategorized

Optimising Distinct Count


Great blog entry from Denny Lee on optimising distinct count measures here:
 
I’d seen a ppt deck of a presentation he did at PASS a few years ago where he first set out these ideas, but it looks like he’s now updated them for AS2005.
 
On a similar topic, I’ve been working with a customer recently where we needed to optimise a calculated measure that did a count of a distinct list of customers (>1 million in the dimension) who met certain criteria. The filter couldn’t be designed into the cube or dimension completely, so it had to be done with a Filter function and, much as Denny describes, I could see in Profiler that the data was being read off disk pretty quickly but that the filter/count was taking all the time, because only one CPU was being utilised. Unfortunately I found out that there’s no way to design parallelism into the formula engine in the same way that Denny’s technique designs parallelism into the work the storage engine has to do – apparently AS prevents two queries executing at the same time unless one of them is a Cancel, and despite experimenting with sprocs to do this I had no luck, because none of the code in the formula engine is thread-safe. I was able to pass the raw data into a sproc using SetToArray and do the work outside the formula engine, but that performed even worse. Hohum…
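
For illustration, the calculation looked something like this – not the customer’s real logic, just the general Filter-then-Count pattern rewritten against Adventure Works with a made-up measure threshold:

WITH MEMBER MEASURES.[Filtered Customer Count] AS
COUNT(
FILTER(
[Customer].[Customer].[Customer].MEMBERS,
[Measures].[Internet Sales Amount] > 1000
)
)
SELECT MEASURES.[Filtered Customer Count] ON 0
FROM [Adventure Works]

The Filter here is evaluated serially in the formula engine, once per customer, which is exactly where the one-CPU bottleneck shows up.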

Written by Chris Webb

November 19, 2006 at 11:49 am

Posted in On the internet

Custom Reports in SP2


There have been a few blog entries about the ability to add your own custom reports to SQL Server Management Studio in SP2, for example:
 
All very database-engine orientated, but it’s worth pointing out that you can do exactly the same thing when you’re connected to Analysis Services: just right-click on an object and choose ‘Reports’. The built-in reports are very basic, but I think there’s scope for creating some really useful ones: for example, it would be cool to be able to visualise the attribute relationships on a dimension (creating diagrams rather like the ones Mosha has in his post here) or the aggregation designs stored on a measure group. If only I had the time…

Written by Chris Webb

November 17, 2006 at 1:46 pm

Posted in Analysis Services
