Chris Webb's BI Blog

Analysis Services, MDX, PowerPivot, DAX and anything BI-related

Archive for November 2009

Free version of Microstrategy Reporting Suite for SSAS

with 4 comments

Here’s a cheeky move by Microstrategy: they’ve made the free version of their Reporting Suite work for Analysis Services. More details and a download link here:
http://www.microstrategy.com/freereportingsoftware/learnmore/microsoft-analysis-services-ssas.asp

I’ve not tried it so I don’t know whether it’s any good or not, but it’s free and you can have up to 100 users, so it will be worth checking out. Of course this is Microstrategy trying to hurt Microsoft and its partners, but, well, it’s free…

Written by Chris Webb

November 25, 2009 at 8:23 pm

Posted in Client Tools

SQLBits V Summary & Thanks

leave a comment »

Another SQLBits – the fifth! – has come and gone, and I wanted to say thanks to everyone who helped to make it such a success. This was the first time it has been a three-day event, which meant even more organisation work was needed, but looking back I think it all went extremely smoothly and was by far the slickest conference we’ve put on so far. My colleagues on the organising committee, Simon Sabin, Martin Bell, Allan Mitchell, Darren Green and James Rowland-Jones, are a great bunch of people and it’s always a pleasure to work with them on SQLBits, even if it does take a big chunk out of our spare time. Thanks are also due to the speakers and sponsors, without whom the event would not be possible, and I’d also like to highlight the people who volunteered to help out doing unglamorous things like stuffing the attendee bags and room monitoring – Rachel Clements, Jon Reade, Gary Short, Rachel Hawley, Richard Douglas, Luke Hayler and many others.

If you came to SQLBits I hope you enjoyed it, and if you did enjoy it please let everyone know by blogging and twittering about it. Please also join the SQLBits groups on LinkedIn and Facebook, and if you’ve got any pictures of the event why not post them on the latter?

Anyway, it’s time to crack on with some real work and deal with the big pile of emails that has built up over the last few weeks. I’ve missed a whole bunch of big announcements I would otherwise have blogged about, but I’m sure you’ve caught them elsewhere… I must get round to downloading and installing the latest PowerPivot CTP etc.

See you at the next SQLBits!

Written by Chris Webb

November 24, 2009 at 5:39 pm

Posted in Events

Pinpoint and Dallas

with one comment

Interesting news from PDC: Microsoft has announced two new services – Pinpoint and Dallas.

You can find Pinpoint here: http://pinpoint.microsoft.com

Here’s the blurb from the site:

Pinpoint is the fast, easy way for business customers to find experts, applications, and professional services to meet their specific business needs—and build on the software they already have.

At the same time, Pinpoint helps developers and technology service providers quickly and easily get software applications and professional services to market—and engage customers who need what they offer.

Pinpoint is the largest directory of qualified IT companies and their software solutions built on Microsoft technologies.
  • More than 7,000 software application offerings.
  • More than 30,000 Microsoft-technology experts.
  • The largest, most diverse set of Microsoft business platform offerings in the industry in a central location.
  • Direct links between applications and the services that support them.

Whether you’re searching for expert help or offering it, Pinpoint helps you easily find and engage the right people and technologies to get the job done.

 

Much, much more interesting from a BI point of view is Dallas, which is part of Pinpoint: http://pinpoint.microsoft.com/en-US/Dallas

It’s Microsoft’s marketplace for data, all built on Azure. Again from the blurb:
Microsoft Codename “Dallas” is Microsoft’s Information Services business, enabling developers and information workers to instantly find, purchase, and manage Web services and datasets to power the next set of killer applications on any platform.

The Register has the best write-up of what this is here: http://www.theregister.co.uk/2009/11/17/microsoft_dallas_data_service/

From that article:
Dave Campbell, a Microsoft technical fellow, demonstrated Dallas at PDC. He showed a list of data providers from the partners such as infoUSA, subscriptions, the ability to store structured and unstructured data, and to explore the data without needing to parse it, to preview the data in ATOM, invoke the data as a Rest service and analyze the data using PowerPivot in Microsoft’s Excel spreadsheet program.

Note the last part of that quote! Here at last is the ability to buy that third-party data that’s been a part of every PowerPivot demo. I’ve worked with a lot of companies that sell data in my career, and this looks like it could be a very significant development for them. I’d even heard vague rumours that MS were interested in buying commercial data providers at one point, several years ago – if they were prepared to go to that extreme then it would certainly go a long way to making this strategy a success.

Now just think how cool it would be if SSAS or PowerPivot could be hosted on the cloud, so all you needed was Excel to analyse this data. Maybe one day…

Written by Chris Webb

November 17, 2009 at 9:12 pm

Posted in Cloud

SQLBits Agenda Published

leave a comment »

At long last, the SQLBits agenda has been published:
http://www.sqlbits.com/information/newagenda.aspx

It’s not too late to register, even though SQLBits is only next week. It’s looking like it will be the largest event yet in terms of attendance… If you’re coming and you see me around, say hello!

UPDATE: car sharing is live now too - http://sqlbits.com/CarSharing.aspx

Written by Chris Webb

November 13, 2009 at 5:06 pm

Posted in Events

Ragged Hierarchies, HideMemberIf and MDX Compatibility

with 46 comments

Here’s something that I suspect a few people out there will be surprised I didn’t know – but as far as I’m concerned, if I didn’t know it then it’s worth blogging about.

Anyway, it concerns ragged hierarchies, the HideMemberIf property and the MDX Compatibility connection string property. Now you probably know that if you want to turn a user hierarchy into a ragged hierarchy (perhaps to avoid using a parent-child hierarchy) you need to use the HideMemberIf property on the user hierarchy. For example, imagine you were using the following SQL query as the basis of your customer dimension:

SELECT 'UK' AS Country, 'Bucks' AS State, 'Amersham' AS City, 'Chris Webb' AS Customer
UNION ALL
SELECT 'Italy' AS Country, 'BI' AS State, 'Biella' AS City, 'Alberto Ferrari' AS Customer
UNION ALL
SELECT 'Vatican' AS Country, 'Vatican' AS State, 'Vatican' AS City, 'The Pope' AS Customer

We could build a dimension off this with attributes for Country, State, City and Customer, and for two out of our three customers that would be fine. However, the Pope lives in the Vatican, which is (at least for the purposes of this exercise) a Country with no concept of City or State; in the case of customers who live in the Vatican, we just want to be able to drill down on the Country ‘Vatican’ and see all of the Customers who live there without drilling down through a meaningless State and City.

So what we can do is build a user hierarchy on our dimension with levels Country, State, City and Customer, and on the lower three levels set the HideMemberIf property to OnlyChildWithParentName:

[Screenshot: HideMemberIf set to OnlyChildWithParentName on the State, City and Customer levels of the hierarchy]

Then, with any sensible client tool, we can connect to the cube and browse the dimension as we want:

[Screenshot: browsing the hierarchy – the Vatican drills down from Country straight to its Customers]

I say ‘sensible’ client tool because, of course, this only works if you set:
MDX Compatibility=2
…in the connection string. And of course Excel 2007 hard-codes MDX Compatibility=1 in the connection string and doesn’t allow you to change it, so you can’t use ragged hierarchies properly.
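
For reference, here’s roughly what a connection string with that property set looks like – the server and database names below are just placeholders:

Provider=MSOLAP;Data Source=MyServer;Initial Catalog=MyDatabase;MDX Compatibility=2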

This much I knew.

However, what I didn’t realise until last week when I was moaning about this to TK Anand from the SSAS dev team at PASS, is that for some ragged hierarchies you don’t need to set the MDX Compatibility connection string property at all.

For example, if in our case we duplicate the Customer upwards rather than the Country downwards, like so:

SELECT 'UK' AS Country, 'Bucks' AS State, 'Amersham' AS City, 'Chris Webb' AS Customer
UNION ALL
SELECT 'Italy' AS Country, 'BI' AS State, 'Biella' AS City, 'Alberto Ferrari' AS Customer
UNION ALL
SELECT 'Vatican' AS Country, 'The Pope' AS State, 'The Pope' AS City, 'The Pope' AS Customer

…and then build the dimension, setting HideMemberIf on our user hierarchy to OnlyChildWithParentName, we can get the result we want without setting the MDX Compatibility property. Here’s a screenshot of this new dimension in Excel just to prove it:

[Screenshot: the new dimension browsed in Excel 2007, with the Vatican drilling straight down to The Pope]

The difference here is that we’re hiding all members below the State level right down to the bottom of the hierarchy, rather than hiding members somewhere in the middle of the hierarchy. Truly, this is one of those “Doh, if only I’d known!” moments… this at least means that in some of the scenarios where you’d use ragged hierarchies you can get them to work with Excel, even if it means that we have to hack the data (‘The Pope’ is a Customer, not a State or a City). On the other hand there are plenty of scenarios where you do need to hide members in the middle of a hierarchy, and frankly I don’t see why Excel 2007 can’t set MDX Compatibility=2 in its connection string so they work properly.

Written by Chris Webb

November 11, 2009 at 11:36 pm

Posted in Analysis Services

PASS Summit Thoughts

with 10 comments

The PASS Summit is over for another year and I’m just starting out on the long trip back home, so there’s plenty of time to get my thoughts together on what’s happened over the past week. In fact there’s not much to say about the event itself: it was, as ever, a lot of fun and totally worthwhile. Hey, within 30 minutes of arriving at the conference I learned I’d won an award for the best BI-related blog entry, for my post on implementing real SSAS drilldown in SSRS!

Attendance was up from last year, although the recession probably still took its toll: remember that there was no BI Conference this year, and I would have thought that a lot of people who would have gone to it went to PASS instead. To be honest, I think not having a BI Conference is a good thing. I don’t like having to choose which conference to attend, and part of the benefit of a conference is getting as many members of a tech community together in one place as possible. And this was certainly the largest gathering of Analysis Services people I’ve ever seen: all the usual crowd were there, I met a lot of people who I’d only met a few times before, and I finally got to meet Darren Gosbell in person after having known him by email for at least five years. One complaint I would make about the event is that the sessions weren’t scheduled particularly well. I know everyone always complains about this but in this case it did seem worse than usual: my session, for example, was up against two other SSAS-specific sessions, while in other cases there were time slots with no SSAS content at all.

The other benefit of PASS is that you get to talk at length about what’s going on in the world of SQL Server with other like-minded people. As a result you get to crystallise your thoughts on a lot of matters and – guess what – I’m going to share mine here.

First of all, the topic that was on everyone’s lips was PowerPivot. In fact everyone at the conference must have seen the standard demo at least five times, and there were also a lot of advanced sessions on it too. Don’t get me wrong, I really think PowerPivot is cool from a technology point of view, I am going to take the time to learn it, and I also think from a make-money-by-getting-people-to-upgrade-to-Office-2010 point of view it is a very clever move for Microsoft. But my feelings about it remain ambivalent. Quite apart from the arguments about it discouraging ‘one version of the truth’ and encouraging spreadmarts, which have already been discussed ad nauseam, I have another problem with it: I don’t honestly know whether I, as a consultant, will be able to make any money from it. The very nature of it, as a self-service tool, means no expensive outside consultancy is necessary. I don’t think it will take business away from me though; it will be widely used, and it will be used instead of regular SSAS for more basic projects, but the more serious stuff will stay with SSAS, I hope. I think the need for sophisticated security and more complex calculations will be the deciding factor when people choose between SSAS and PowerPivot; I’m not sure I see many people upselling from PowerPivot to SSAS either. We’ll see.

Something that worries me more about PowerPivot is the fact that it seems to have diverted the attention of the SSAS dev team. For SSAS 2008 we had few new features, although the performance improvements were very welcome. For 2008 R2 I can only think of one new feature in SSAS, and that’s the ability to use calculated members in subselects that will allow Excel 2010 to use time utility dimensions properly (I’ll blog about that at some point). Even though work on good old server-side SSAS will resume for the next major release of SQL Server I worry that PowerPivot will take priority in the future. If this happened it would be bad for me and other BI partners from a business point of view, and seems crazy given that SSAS has been such a successful product in the enterprise sector; it’s not like there aren’t a lot of new features and fixes that could be done. Shades of IE6 and Microsoft getting complacent once it’s cornered a market, I think.

Last of all on PowerPivot, I suspect that there is something new relating to it in the roadmap that hasn’t been announced yet. David DeWitt devoted his keynote on Thursday to it and to the specifics of column-store databases and the Vertipaq engine (the new in-memory storage engine that PowerPivot uses), and at the end hinted at this, saying that although he couldn’t make any announcements, those people who had been paying attention might have some ideas about what the future held for it. Of course I hadn’t been paying attention properly, but the obvious thing would be to integrate it with the relational database somehow. Given that PowerPivot is now being hosted inside Sharepoint, why not host it in SQL Server too? It’s already very table- and join-friendly, and I could imagine a scenario where it was used inside SQL Server and pointed at a schema, where some kind of proactive caching kept the data in SQL in sync with the data in the Vertipaq store, where difficult BI calculations could be expressed in DAX, but where the whole thing was transparent to TSQL. Imagine integrating that with Madison too!

Moving on, the other thing that has become clear to me is that I really have to sit down and learn Sharepoint (or at least the relevant bits of it) properly. It’s at the heart of Microsoft’s BI strategy and there’s no avoiding it. I have to admit to some mixed feelings about this move though, and I know other people I talked to at the conference share them. Partly it’s because, in the past, there were BI specialists and there were Sharepoint specialists and we didn’t necessarily have much to do with each other; now,  though, the two worlds are colliding and I’m outside my comfort zone. You might say that Sharepoint has been part of the MS BI strategy for ages now, what with PerformancePoint etc, but I see an awful lot of MS BI customers in my work and I very rarely seem to see any Sharepoint, although it could be because I’m not looking out for it. A more valid objection is that the need for Sharepoint Enterprise Edition CALs adds a lot of extra cost to a project; and from a technical standpoint Sharepoint itself carries a very big overhead – its installation and maintenance may put a lot of customers off if they don’t already have a company-wide Sharepoint strategy, and if they do have one they may not be willing to go to 2010 for some time. Sharepoint might be just too big for some customers to swallow, and be a difficult sell for BI partners.

I’d like to stress, though, once again, that I see the considerable technical benefits of using Sharepoint for BI, and even if the reception of the latest wave of PerformancePoint has been somewhat muted (eg the realisation that the decomposition tree has been tacked on at the last minute and isn’t properly integrated) I am impressed with what’s coming with Excel 2010 and Excel Services too; for example I think the Excel Services REST API is very cool indeed, and as an SSAS client Excel 2010 is a big improvement on 2007 (which wasn’t all that bad either). I’ve decided I also need to learn Excel properly now as well – get to know all those advanced Excel functions, use Solver and all that. Once again two worlds are colliding: the Excel guys and the SSAS guys are going to have to learn a lot more about each other’s technologies for truly effective BI applications to get built.

Anyway, I think this post has gone on quite long enough now. As always, your comments on everything I’ve written here would be much appreciated.

Written by Chris Webb

November 9, 2009 at 10:57 am

Posted in Random Thoughts

Live Blogging @PASS – SSAS Consolidation and Virtualisation

with one comment

Here are some notes from the SQLCat team’s session on SSAS consolidation and virtualisation; they’re a bit fragmentary since I’m too busy paying attention to what’s being said! I get asked about these issues by my customers all the time.

  • Use Windows System Resource Manager to control how many resources SSAS can use. For more on WSRM see http://technet.microsoft.com/en-us/library/cc755056.aspx
  • If SSAS and SQL are on the same server, use the Shared Memory protocol to improve processing performance (see the connection string sketch after this list)
  • Also since resource usage requirements for SSAS and SQL will be different when processing and when querying, if they’re on the same box you can use WSRM to dynamically change resource allocations at different times.
  • When consolidating multiple SSAS databases on the same machine, it can be good to use multiple instances (maybe one per database) to give fine control over resource usage, service packs etc.
  • Someone asked the question of whether there is an overhead to using multiple instances each with one database rather than using a single instance with multiple databases. Answer: multiple instances would perform better but use slightly more resources; better to start with a single instance and only move to multiple instances when you have a good reason to do so.
  • Tests were run comparing SSAS running on bare metal and on Hyper-V – no difference in performance between the two for querying, but for the Storage Engine (processing, and SE activity when querying) you use 1.5 times more threads on Hyper-V (you can modify the default number of threads available for processing, and may therefore need to change this for Hyper-V).
  • Description of a custom-built system for load balancing SSAS developed by the MSSales team inside Microsoft. Code and white paper will be available in a few months.
  • There were various issues with Synchronization in SSAS that have been fixed in the latest CUs.
  • IIS7 performs much better for HTTP access to SSAS – it performs as well as a direct connection. I’m sure I also heard somewhere that there were some performance issues for HTTP access, noticeable over a slow network, that have also been fixed in the latest CUs.
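
As a rough illustration of the Shared Memory point above, the data source that SSAS uses for processing could have a connection string along these lines – the lpc: prefix forces the shared memory protocol, and the server and database names are just placeholders:

Provider=SQLNCLI10;Data Source=lpc:MyServer;Initial Catalog=MyDataWarehouse;Integrated Security=SSPI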

The most useful session so far at this conference for me – I learned a lot.

Written by Chris Webb

November 5, 2009 at 8:00 pm

Posted in Analysis Services

Quest add support for SSAS monitoring

leave a comment »

Something I saw yesterday at PASS: Quest now have support for monitoring SSAS from their “Spotlight on SQL Server Enterprise” product. See

http://www.quest.com/newsroom/news-releases-show.aspx?contentid=10602
http://www.quest.com/spotlight-on-SQL-Server-enterprise/features-benefits.aspx

It’s pretty basic at the moment – they capture some Perfmon counters and data from schema rowsets, but no trace data – and nowhere near as sophisticated as what SQLSentry have, but it’s good to see another vendor entering this market.

Written by Chris Webb

November 5, 2009 at 5:30 pm

Posted in Analysis Services

Live Blogging @PASS – SQL Server BI in the Cloud

leave a comment »

Some notes/thoughts while I’m listening to John Welch’s session here at PASS on “SQL Server BI in the Cloud”. The room is packed… full marks to John for picking such a hot topic to speak on!

  • Summary of reasons why the cloud is interesting for BI – easy scaling, setup, sizing etc.
  • Distinction between ‘virtualised’ and ‘hosted’ services.
    • Virtualised = pay on usage, instant scale, reduced scaling concerns
    • Hosted = buy a set capacity
  • Azure – making the point that, unlike most other cloud offerings, you can leverage your existing (SQL Server) skills
  • Notes that other parts of the BI stack, apart from the relational engine, have been promised for the future. My feeling is that when/if SSAS in the cloud appears, it’s more likely to be PowerPivot in the cloud; note also that SSRS in the cloud has kind of already appeared with Access Services.
  • BI scenarios not really considered so far by the Azure team. I echo John’s response of “Why???”
  • Description of the Azure architecture. I was talking to someone last night about the way Azure requires use of SQL authentication (which MS have discouraged us from using for years!); SSAS of course only supports Windows authentication, which would be a problem for SSAS in the cloud, so I wonder if in the future we’ll get username/password authentication for SSAS?
  • Limitations of Azure: 10GB max data, query limit of 5 minutes, insert/update slow. Though for some smaller, short-lived BI solutions Azure is a perfectly good solution; sharding is an option too.
  • Shows SSRS (running locally) working against data from Azure. Works better in CTP2 but there’s still the occasional bug.
  • Before the presentation started I asked John if he’d tried using SSAS in ROLAP mode against Azure; he said he had and it worked, but it was v. slow (as you’d expect).
  • Using SSAS in MOLAP mode, since processing queries are v. slow and there’s a query timeout of 5 mins, you need to create lots of small partitions to ensure processing queries finish as quickly as possible (see the sketch after this list). Proactive caching can’t use automatic notifications.
  • SSIS out of the box support coming in R2. At the moment, SSIS doesn’t support bulk insert operations to ADO.Net destinations. 
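
To make the partitioning point above concrete, here’s a minimal sketch of the kind of source query each SSAS partition might be bound to – the table and column names are hypothetical – so that no single processing query runs anywhere near the 5-minute limit:

SELECT OrderDateKey, ProductKey, SalesAmount
FROM dbo.FactSales
WHERE OrderDateKey >= 20091101
AND OrderDateKey < 20091201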

Written by Chris Webb

November 4, 2009 at 7:20 pm

Posted in Cloud

Live Blogging @PASS – Master Data Services

leave a comment »

I’m currently in John McAllister’s session on Master Data Services at the PASS Summit, and here are some notes…

  • The first public beta is due next week
  • MDS will be packaged with 2008 R2 (rather than Sharepoint); it will be on the DVD but not part of the main install
  • Will have an API – everything you can do in the UI, you can do in the API
  • Although it’s part of SQL Server it will still rely on Sharepoint for workflow; the main, web-based UI is not Sharepoint-based though
  • Has simple business rule validation capabilities, eg make sure that the list price of a product is greater than its cost
  • Includes basic documentation features
  • Also has auditing features – you can see every transaction ever made in the system, reverse changes made and so on
  • Models are containers for different types of data (products, customers); every model can have a version, and versions can be locked, open for editing etc; models can also be secured
  • Also has basic notification features, so users/groups can get emails when something changes
  • No direct SSAS integration at the moment, but they hope to have some in the future

Written by Chris Webb

November 3, 2009 at 7:17 pm
