Chris Webb's BI Blog

Analysis Services, MDX, PowerPivot, DAX and anything BI-related

Why not to use the external assembly approach for dynamic security

with 7 comments

If you’re familiar with dynamic security in SSAS, you probably know there are two basic ways of implementing it: the measure group approach, where security credentials are stored in a measure group inside the cube and the MDX set expression in your role queries this measure group to work out which members each user has access to; and the external assembly approach, where the role uses a custom MDX function from an external assembly to get the same information. Both approaches are covered in some detail in “Expert Cube Development” (still available in all good bookstores, BTW), but I’ve always preferred the measure group approach because a) I’m a rubbish .NET developer, and much more at ease modelling data into a measure group, and b) I’ve heard lots of complaints about opening a connection being really slow when you’re using the external assembly approach.
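To give a flavour of the measure group approach, the allowed member set in the role might look something like the sketch below – all the dimension, measure group and measure names here are hypothetical, so you’d need to substitute your own:

```mdx
// Allowed set for the role's dimension data security on [Customer].
// Assumes a hypothetical [Security] measure group bridging a [Users]
// dimension (keyed on the Windows login) to the [Customer] dimension.
NonEmpty(
  [Customer].[Customer].[Customer].Members,
  (
    StrToMember("[Users].[User].[User].&[" + Username() + "]"),
    [Measures].[Security Count]
  )
)
```

The NonEmpty() call returns only those customers that have a row in the security measure group for the current user, which is what makes the security dynamic.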

Anyway, to cut a long story short, I had the pleasure of meeting Gerhard Brückl (you may know him from the SSAS MSDN Forum) at SQLBits the other month and he had an interesting question for me: he’d been using the external assembly approach and found that once a user had connected to the cube, their security credentials were cached and the assembly wasn’t called again until the cube’s cache was cleared, even if the user disconnected and reconnected. While this is a good thing from a performance point of view, it has one big drawback, as Gerhard pointed out: if the user’s security rights subsequently change, the cube will not pick this up immediately. And of course this means that a user may have access to data they really shouldn’t have access to – which is a potentially big problem.

This initiated an interesting discussion with Akshai Mirchandani and various other SSAS obsessives about whether this could be worked around. The short answer is that it can’t be, at least not easily. You can execute an XMLA ClearCache on the whole cube but this might affect query performance quite significantly; you can reduce the impact by clearing the cache of just one measure group in your cube (you could even create a dummy measure group for this purpose) since this will clear the scope cache where the permissions are stored, without affecting the SE caches of the other measure groups. But you’re still left with the problem of knowing when to clear the cache: you’d need to write some code somewhere that checks to see if permissions have changed and runs a ClearCache if they have. As a result of all this, I’d say that if you’re using dynamic security, your permissions change regularly and you need the cube to reflect these changes as soon as possible, you should avoid using the external assembly approach altogether and use the measure group approach instead. 
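For reference, clearing the cache of just one measure group can be done with an XMLA command along these lines – the database, cube and measure group IDs below are hypothetical, and note that XMLA wants object IDs, not names:

```xml
<ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>My Database</DatabaseID>
    <CubeID>My Cube</CubeID>
    <MeasureGroupID>Security Dummy</MeasureGroupID>
  </Object>
</ClearCache>
```

Scoping the command to a single (possibly dummy) measure group clears the scope cache where the permissions live without throwing away the SE caches of your other measure groups.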

Written by Chris Webb

May 9, 2011 at 10:02 pm

Posted in Analysis Services

7 Responses

  1. For me, performance and low latency updates are just more reasons to use measure groups for dynamic security. I would encourage BI developers to use measure groups for dynamic security for its simplicity of architecture, and ease of testing/debugging.

    Richard Lees

    May 10, 2011 at 5:19 am

  2. I have an interesting requirement for dynamic security, and I’d like to use measure groups but I’m not sure it’ll work. We have a single fact table, and for simplicity, a single Organization dimension. The problem now is that Organization role plays several different roles for each fact (examples like Supporting Org, Primary Org, Contracting Org, etc). If I’m a member of company A then I need to see all facts where company A plays some role. So for example, if I have a fact that says Company A is the Supporting Org and Company B is the Primary Org then I need to see it, but I shouldn’t see any facts where Company B is the supporting Org and Company C is the Primary Org. See what I mean? I have a copy of your Advanced BI book, so I have pondered cell based security but I worry it’s going to kill performance (we have hundreds of millions of rows). Are there any good ways of performing an “OR” across role playing dimensions for security? So far the only thing I’ve come up with is to create an artificial dimension that would essentially be the distinct combination of the various organizations and then link the fact table to that dimension, followed by having a factless security table that’d link user IDs to that artificial dimension and the ETL would do the work of determining if a user should have access to that “combination” of Orgs. This seems like it could potentially result in a dimension with millions of records, followed by a factless security table with tens of millions of records (maybe hundreds of millions). Any ideas are greatly appreciated, thanks.

    Mike Pugh

    December 20, 2011 at 7:32 pm

    • I worked on exactly the same problem last week, strangely enough, and we created an artificial dimension in exactly the way you describe. However in this case the artificial dimension was reasonably small so we didn’t encounter any of the problems you describe. Also there’s no need to build a factless fact table with millions of records – a factless fact table where a user is linked to a single company is sufficient, and you can then use LinkMember() to work out which members on the artificial dimension to grant access to. Hmmm, this could be an idea for a blog post… I tell you what, I’ll write it up this week.
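To illustrate the LinkMember() idea with hypothetical names: the factless fact table only needs to tie a user to a single member of the base Organization dimension, and LinkMember() can then map that member onto each role-playing hierarchy, something like:

```mdx
// LinkMember() maps a member of one hierarchy onto another hierarchy
// with the same key structure - here from the base Organization
// dimension onto the role-playing Supporting Org dimension.
// All object names here are hypothetical.
LinkMember(
  [Organization].[Organization].&[Company A],
  [Supporting Org].[Organization]
)
```

The allowed set in the role would then combine these mappings across the role-playing hierarchies to work out which members of the artificial dimension the user should see.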

      Chris Webb

      December 20, 2011 at 9:21 pm

      • Thanks so much for the very quick reply – I’ll investigate the LinkMember() notion and I’ll look forward to your write up.

        Mike Pugh

        December 20, 2011 at 9:28 pm

  3. [...] it’s appropriate to address the dynamic external assembly solution that exists. See an example by Chris Webb (blog) and read the main reason why we did not choose this [...]

  4. I’ve tried both the assembly approach and the MDX+Username() approach and can’t get either to reflect the changes in the “bridge table”.
    In my experience the following:

    “if the user’s security rights subsequently change the cube will not pick this up immediately”

    is true for both approaches!

    I’ve also tried clearing the cache and nothing happens.
    I’ve posted a more precise explanation on Stack Overflow:

    http://stackoverflow.com/questions/13759541/analysis-services-does-not-clear-the-cache-as-instructed-dimension-dynamic-secu

    Can that be done?
    Can SSAS permissions actually reflect the permissions stored in the relational database, so that they react when the content of the relational table holding the permissions changes?

    Or is it necessary to process everything all over again?
    ROLAP seemed like a reasonable solution, but that didn’t work for me either (see my comment on Stack Overflow).

    Thanks,

    Igor

    Igor

    December 7, 2012 at 4:00 pm

    • Hi Igor,

      Clearing the cache should work. From what you say it sounds like your XMLA is executing successfully – but it doesn’t sound as though it’s clearing the cache. Can you post the XMLA you’re using? Are you clearing the cache of the entire database?

      If you’re using the MDX+Username() approach, what you should be doing is reprocessing only the measure group based on your security bridge table. This will clear the SE cache for that measure group and the entire calculation cache.
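For reference, a sketch of the XMLA to reprocess just that measure group – the database, cube and measure group IDs here are hypothetical, and XMLA wants object IDs rather than names:

```xml
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>My Database</DatabaseID>
    <CubeID>My Cube</CubeID>
    <MeasureGroupID>Security Bridge</MeasureGroupID>
  </Object>
  <Type>ProcessFull</Type>
</Process>
```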

      Chris

      Chris Webb

      December 10, 2012 at 3:33 pm

