Chris Webb's BI Blog

Analysis Services, MDX, PowerPivot, DAX and anything BI-related

Archive for April 2013

PowerPivot Workbook Size Optimizer

with 3 comments

Browsing through my RSS feeds this morning, I saw a new download on the Microsoft site: an Excel add-in (Excel 2013 only, I think) called the PowerPivot Workbook Size Optimizer. You can get it here:
http://www.microsoft.com/en-us/download/details.aspx?id=38793

Here’s the blurb from the site:

The Workbook Size Optimizer for Excel can better compress data inside workbooks that use PowerPivot or Power View if this data comes from external data sources. The best size compression can be achieved for workbooks based on SQL Server databases and there are a few tricks we can do for other SQL data sources as well. The optimizer will install as an add-in to Excel and will provide you with a nice wizard to better compress the size of your workbook. Using the optimizer you can often get datasets of more than 1,000,000 rows in a workbook under 10 MB, share it in SharePoint Online and interact with it using the Excel Web App in any browser.

Here’s a screenshot:

[screenshot of the optimizer wizard]

Despite testing a few models with data from Adventure Works I couldn’t get it to suggest any changes (it didn’t spot that I had imported a column containing binary data, hmmm), but I guess it needs more testing on larger/more diverse data sources. Maybe there’s a blog post coming soon from the PowerPivot team explaining how to use it?

UPDATE: after playing around with it a bit more, I was able to get it to suggest some changes to tables. Marco has some more details:
http://sqlblog.com/blogs/marco_russo/archive/2013/04/30/powerpivot-workbook-size-optimizer-powerpivot-tabular.aspx

And there’s a white paper on the rules that it uses:
http://office.microsoft.com/en-gb/excel-help/create-a-memory-efficient-data-model-using-excel-2013-and-the-powerpivot-add-in-HA103981538.aspx

Written by Chris Webb

April 30, 2013 at 9:52 am

Posted in Excel, PowerPivot

A Partly Successful Attempt To Create Life With Data Explorer (Power Query)

with 5 comments

NOTE: This post was written before Data Explorer was renamed as Power Query. All of the content is still relevant to Power Query.

I’ll apologise for the title right away: this post isn’t about a Frankenstein-like attempt at creating a living being in Excel, I’m afraid. Instead, it’s about my attempt to implement John Conway’s famous game ‘Life’ using Data Explorer, how it didn’t fully succeed, and some of the interesting things I learned along the way…

When I’m learning a new technology I like to set myself mini-projects that are more fun than practically useful, and for some reason a few weeks ago I remembered ‘Life’ (which I’m sure almost anyone who has learned programming has had to write a version of at some stage), so I began to wonder if I could write a version of it in Data Explorer. This wasn’t because I thought Data Explorer was an appropriate tool to do this – there are certainly better ways to implement Life in Excel – but I thought doing this would help me in my attempts to learn Data Explorer’s formula language and might also result in an interesting blog post.

Here’s the code I came up with eventually:

let
    // the user-entered starting grid, from the Excel table called Input
    Source = Excel.CurrentWorkbook(){[Name="Input"]}[Content],
    // reads "Input" or "Output" from the single-cell Excel table called Source
    GetAction = Excel.CurrentWorkbook(){[Name="Source"]}[Content]{0}[Source],
    // try to read the previous iteration's results; this errors if the Output table doesn't exist yet
    GetLastOutput = try Excel.CurrentWorkbook(){[Name="Output"]}[Content],
    ChooseInput = if GetAction="Input" or GetLastOutput[HasError]
        then Source else GetLastOutput[Value],
    // put every cell of the grid into a single column
    Unpivot = Table.Unpivot(ChooseInput,{"1", "2", "3", "4", "5",
        "6", "7", "8", "9", "10"},"Column","Value"),
    InsertedIndex = Table.AddIndexColumn(Unpivot,"Index"),
    PivotedRows = Table.RowCount(InsertedIndex),
    OriginalColumnCount = Table.ColumnCount(ChooseInput),
    // the value at position i in the unpivoted list, or 0 if i is out of range
    GetValue = (i) => if i<0 or i>=PivotedRows then 0
        else InsertedIndex[Value]{i},
    // the sum of the values at the eight neighbouring positions of i
    GetNeighbours = (i) => GetValue(i-1) + GetValue(i+1)
        + GetValue(i-OriginalColumnCount) + GetValue(i+OriginalColumnCount)
        + GetValue(i-OriginalColumnCount-1) + GetValue(i-OriginalColumnCount+1)
        + GetValue(i+OriginalColumnCount-1) + GetValue(i+OriginalColumnCount+1),
    NeighbourCount = Table.AddColumn(InsertedIndex, "Neighbours",
        each GetNeighbours([Index])),
    // the rules of Life: a live cell with two or three neighbours survives,
    // a dead cell with exactly three neighbours becomes live
    Custom1 = Table.AddColumn(NeighbourCount, "NewValue",
        each if [Value]=1 and ([Neighbours]=2 or [Neighbours]=3)
        then 1 else if [Value]=0 and [Neighbours]=3 then 1 else 0),
    HiddenColumns = Table.RemoveColumns(Custom1,{"Value", "Index", "Neighbours"}),
    // re-pivot: group by column, transposing each group's rows back into columns
    Custom2 = Table.Group(HiddenColumns, {"Column"},
        {{"Count", each Table.Transpose(Table.RemoveColumns(_,"Column")), type table}}),
    #"Expand Count" = Table.ExpandTableColumn(Custom2, "Count", {"Column1", "Column2",
        "Column3", "Column4", "Column5", "Column6", "Column7", "Column8", "Column9",
        "Column10"}, {"1", "2", "3", "4", "5", "6", "7", "8", "9", "10"}),
    HiddenColumns1 = Table.RemoveColumns(#"Expand Count",{"Column"})
in
    HiddenColumns1

You can download the associated Excel 2013 workbook here.

Some explanation of the code:

  • The first problem that I faced was that in Life you need to start with a grid of cells, apply the rules of the game, and then output the results to the same grid. This is a problem for Data Explorer because, unless you have already run a query, the output table doesn’t exist, and even if you have, you can’t update the values in that table without them being overwritten. So I decided to create a table where you can enter the initial state of your grid as a series of 1s and 0s. It’s on the Input sheet and it’s called Input:

    [screenshot of the Input table]

  • Whether the Input table is used as the starting point for the query depends on the contents of yet another table, called Source, on the sheet called Output in the workbook. It can hold one of two text values: "Input" or "Output" (I used Excel’s Data Validation functionality to lock this down), and the ChooseInput step then determines whether the Input table or the table called Output is used as the input for the Data Explorer query. Here’s what the Output worksheet looks like:

    [screenshot of the Output worksheet]

    The workflow is therefore as follows: enter the starting point on the Input table, make sure the Source table shows “Input”, refresh the query, change the Source table to “Output” and then refresh the query to show each subsequent iteration.

  • The Output table simply displays the same values as the table that shows the results of the Data Explorer query. I created a separate table for two reasons: one, I wanted to use custom formatting to show the results; and two, to try to work around the big problem that I eventually found I couldn’t work around completely, which is that Data Explorer doesn’t actually support this type of recursive query (ie scenarios where the output of a query is also used as the input). This thread on the Data Explorer MSDN Forum has some details on the problem. I found I got errors after just about every other step when using the Data Explorer-created table as the input, whereas the errors were much less frequent if I duplicated the values in a separate table. It now works most of the time, but it still errors far too much for my liking. Hohum.
  • I also tried using Data Explorer’s error-handling functionality, via the try statement in the GetLastOutput step, to solve this problem, but it made no difference. Still, it was good to find out how a try statement works: basically, when you use it, the return value is a record containing two values, one which indicates whether an error occurred in the try, and the other the value that was returned if there was no error. You can see me checking these values in the ChooseInput step, with the calls to GetLastOutput[HasError] (which returns true if there was an error in the try) and GetLastOutput[Value] (which returns the value tested in the try if there was no error). There’s a minimal standalone sketch of how try behaves at the end of this list.
  • Once the query has worked out which input to use, the next step is to calculate the values in the grid for the next iteration. I decided that the easiest way to do this was to unpivot the resultset using the new Unpivot() function; with all the data in a single column it was then relatively easy to declare the functions GetValue() and GetNeighbours() to apply the rules of Life, outputting the results in a new column in the Custom1 step.
  • The final problem to solve was that I needed to re-pivot the data to get it back into the original table format. Although Data Explorer has an Unpivot() function, it doesn’t have a Pivot() function; luckily, Peter Qian showed me how to do this on the forum in this thread using Table.Group(), and my code is in the Custom2 step (there’s a cut-down sketch of the re-pivoting technique at the end of this list too).
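
To make the try behaviour a bit clearer, here’s a minimal standalone sketch using the same pattern as the GetLastOutput and ChooseInput steps above (nothing new here, just the same calls isolated from the rest of the query):

let
    // try stops the query failing: instead of raising an error it returns a record
    AttemptedLoad = try Excel.CurrentWorkbook(){[Name="Output"]}[Content],
    // AttemptedLoad[HasError] is true if the Output table doesn't exist (or anything else went wrong);
    // AttemptedLoad[Value] holds the table when there was no error
    Result = if AttemptedLoad[HasError]
        then Excel.CurrentWorkbook(){[Name="Input"]}[Content]
        else AttemptedLoad[Value]
in
    Result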
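
And here’s a cut-down sketch of the Table.Group()/Table.Transpose() re-pivoting trick on its own, with some made-up sample data standing in for the Life grid (I’m only using Table.FromRecords to build the sample, so treat the values as illustrative):

let
    // a small unpivoted table: two rows of values for each value of "Column"
    Unpivoted = Table.FromRecords({
        [Column = "1", NewValue = 0], [Column = "1", NewValue = 1],
        [Column = "2", NewValue = 1], [Column = "2", NewValue = 0]
    }),
    // group by "Column", transposing each group's remaining rows into the columns of a nested table
    Grouped = Table.Group(Unpivoted, {"Column"},
        {{"Repivoted", each Table.Transpose(Table.RemoveColumns(_, "Column")), type table}}),
    // expand the nested tables to turn those rows back into columns
    Expanded = Table.ExpandTableColumn(Grouped, "Repivoted", {"Column1", "Column2"})
in
    Expanded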

So, despite the errors (and Data Explorer is still in beta, so some errors are only to be expected), many lessons were learned. I hope you find this useful too.

Written by Chris Webb

April 27, 2013 at 9:44 pm

Cloud-Based DAX and MDX Formatter

with 4 comments

It’s been away for a while, but now it’s back – the site where you can enter your MDX or DAX code and have it formatted for you! Here’s the link:

http://formatmdx.azurewebsites.net/


Thanks to Nick Medveditskov, the man responsible for it.

Written by Chris Webb

April 17, 2013 at 11:06 pm

Posted in DAX, MDX

SSAS on Windows Azure Virtual Machines

with 6 comments

You may have already seen the announcement about Windows Azure Virtual Machines today; what isn’t immediately clear (thanks to Teo Lachev for the link) is that Analysis Services 2012 Multidimensional and Reporting Services are installed on the new SQL Server images. For more details, see:
http://msdn.microsoft.com/en-us/library/jj992719.aspx

SSAS 2012 Tabular is also supported but not initially installed.

Written by Chris Webb

April 16, 2013 at 9:13 pm

Applying a Function to Every Cell in a Table in Data Explorer (Power Query)

with 2 comments

NOTE: This post was written before Data Explorer was renamed as Power Query. All of the content is still relevant to Power Query.

Now that the PASS Business Analytics Conference is over, I can get back to playing around with Data Explorer and blogging about it. I’ve been working on a fun demo that I’ll try to blog about later this week, but in the course of creating this demo I came across a technique that I didn’t end up using but which I thought deserved a post on its own: how to apply a function to every cell in a table, rather than just every cell in a column.

For example, let’s imagine that you have a table that looks like this:

[screenshot of the source table]

…and you want to add one to every single cell in the table, so you get:

[screenshot of the same table with one added to every cell]

It’s possible to do this in the UI by creating lots of custom columns and then deleting the original columns, for sure, but I found a more elegant solution. Here’s the full Data Explorer code:

let
    Source = Excel.CurrentWorkbook(){[Name="Input"]}[Content],
    // the function to apply to every cell
    FunctionToApply = (x) => x + 1,
    GetColumnNames = Table.ColumnNames(Source),
    // build a list of {column name, function} pairs, one per column
    TransformList = List.Transform(GetColumnNames, each {_ , FunctionToApply}),
    Output = Table.TransformColumns(Source, TransformList)
in
    Output

Here’s what each step does:

  • Source: gets the data from the Excel table named Input.
  • FunctionToApply: defines a new function that takes a single parameter, x, and returns the value x+1. See this post for more details on using functions in Data Explorer.
  • GetColumnNames: returns a list object which contains the names of all of the columns in the table returned in the Source step.
  • TransformList: this is the interesting step! It creates a new list based on GetColumnNames, but whereas each item in GetColumnNames is just a single value (a column name), each item in this new list is itself a list with two items: the column name and a reference to the function FunctionToApply. If you click on one of the list links shown in the UI (and this isn’t part of the process, I’m only doing this to show what’s there) you can see those two items: the column name paired with the function.
  • Output: the list of lists created in the previous step can now be passed to the Table.TransformColumns() function to apply the function FunctionToApply() to every cell in every column in the table (there’s a small worked-out sketch of what this amounts to after this list).
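
To make it concrete, if the table only had two columns then the TransformList step would, in effect, be building the second argument you could otherwise have typed by hand, something like this (the column names here are just for illustration):

    // a minimal sketch, assuming a source table with just two columns called "Column1" and "Column2"
    Output = Table.TransformColumns(Source, {
        {"Column1", FunctionToApply},
        {"Column2", FunctionToApply}
    })

The advantage of building the list dynamically from Table.ColumnNames() is, of course, that the query carries on working however many columns the table has.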

I’m sure this is going to be useful to me at some point in the future… You can download the sample workbook here.

Written by Chris Webb

April 16, 2013 at 1:58 pm

PASS Business Analytics Conference Summary

with 3 comments

The PASS Business Analytics Conference in Chicago finished yesterday, and because I was there and because I did a fair amount of cheerleading for it (see here and here for example) I wanted to post a few thoughts on how it went.

I’ll be honest and say that I had a few worries before the event. Would anyone want to go? Would the sessions be a repeat of what you get on the BI tracks at the PASS Summit and hundreds of other SQL Server conferences? In fact, everything went really, really well. Some of the sessions were quite sparsely attended (though this had nothing to do with the quality of the content – some of the best sessions didn’t get much of an audience) but overall there was a very respectable number of people (1200ish?). I had as many people in my session on OData as I’d get at any other large conference, and it was standing room only in at least one of Marco’s sessions. I also rather liked the fact that it was smaller than the Summit – it made it much easier to meet all the people I wanted to meet. If it carries on for a few years it could easily attract a much larger number of people.

Regarding the content, I was particularly pleased because a lot of the topics I’d asked for turned up on the schedule. In fact one thing that struck me (and a few other people said the same thing to me as well) was that this was the first conference I’d been to in a long time where there were sessions that I really wanted to see in every time slot. My favourite session of the whole conference was Marc Smith on NodeXL; anyone that reads my blog knows I’ve been a big fan of NodeXL for a long time, but I learned a lot from this session because it concentrated on the basics of social network analysis rather than the tool itself. This was a prime example of the kind of topic that you simply wouldn’t get at a regular SQL Server conference – it was a business analytics session. Even the more technical presentations, such as the one on HPC Services for Excel, were outside the usual boundaries of SQL Server BI. Incidentally, I must get round to playing with HPC Services for Excel – you could use it to parallelise some DAX calculations, or even to batch process large numbers of PowerPivot models on desktop machines overnight…

So, in summary, the conference was a big success and I had a great time. I’ll definitely be going back next year. And did I mention that I got to meet Steven Levitt of Freakonomics fame?

Written by Chris Webb

April 14, 2013 at 2:59 am

Posted in Events, PASS

GeoFlow Public Preview Available

with 8 comments

First big news from the PASS BA Conference: the public preview for GeoFlow is now available. You can download it here:
http://www.microsoft.com/en-us/download/details.aspx?id=38395

Here are the official announcements with all the details:
http://blogs.technet.com/b/dataplatforminsider/archive/2013/04/11/day-2-pass-business-analytics-conference-new-3d-mapping-analytics-tool-for-excel.aspx
http://blogs.office.com/b/microsoft-excel/archive/2013/04/11/dallas-utilities-electricity-seasonal-use-simulation-with-geoflow-preview-and-powerview.aspx

GeoFlow is an add-in for Excel 2013 that allows you to visualise your data on a 3D map, to zoom in and explore that data, and to record ‘tours’ of it. It’s a lot of fun! As a taster, here’s a screenshot of a visualisation showing English secondary schools’ exam results data (average A-Level point score per pupil) broken down by school gender of entry:

[screenshot of the GeoFlow visualisation]

UPDATE: one other thing I have to mention is that when this was announced in the keynote at the PASS BA Conference this morning, Amir Netz did an absolutely astounding demo showing GeoFlow’s touch-based capabilities running on a massive Perceptive Pixel screen (I think it was this one: http://www.perceptivepixel.com/products/82-lcd-multi-touch-display). It was possibly the most impressive demo I’ve seen of any Microsoft BI product. Anyway, I got to play on it myself later and it was as cool as it looked. If you’ve got $80,000 burning a hole in your pocket then you could do worse than invest in one of these babies.

Written by Chris Webb

April 11, 2013 at 2:07 pm

Posted in Excel, GeoFlow

SSAS Multidimensional and MDX Training in Australia

leave a comment »

This is just a quick post to mention that this summer I’ll be running two courses in Australia, one in Sydney on July 23-26, and one in Melbourne on July 29-August 1, in association with the folks at Wardy IT. The course content will cover all things Analysis Services Multidimensional: there’ll be one day on SSAS cube design, two days on MDX queries and calculations, and one day on SSAS performance tuning; you can see the full course outline here. It’s aimed at intermediate-to-advanced SSAS developers who want to deepen their knowledge and learn best practices. I’m really looking forward to it, and I hope to see you there!

Written by Chris Webb

April 8, 2013 at 1:40 am

Posted in Events, Technitrain

LightSwitch and Self-Service BI

with 6 comments

Visual Studio LightSwitch has been on my list of Things To Check Out When I Have Time for a while now; my upcoming session on the uses of OData feeds for BI at the PASS BA Conference (which will be a lot more exciting than it sounds – lots of cool demos – please come!) has forced me to sit down and take a proper look at it. I have to say I’ve been very impressed with it. It makes it very, very easy for people with limited coding skills like me to create data-driven line-of-business applications, the kind that are traditionally built with Access. Check out Beth Massi’s excellent series of blog posts for a good introduction to how it works.

How does LightSwitch relate to self-service BI though? The key thing here is that aside from its application-building functionality, LightSwitch 2012 automatically publishes all the data you pull into it as OData feeds; it also allows you to create parameterisable queries on that data, which are also automatically published as OData. Moreover, you can publish a LightSwitch app that does only this – it has no UI, it just acts as an OData service.

This is important for self-service BI in two ways:

  • First of all, when you’re a developer building an app and need to provide some kind of reporting functionality, letting your end users connect directly to the underlying database can cause all kinds of problems. For example, if you have application-level security, this will be bypassed if all reporting is done from the underlying database; it makes much more sense for the reporting data to come from the app itself, and LightSwitch of course does this out of the box with its OData feeds. I came across a great post by Paul van Bladel the other day that sums up these arguments much better than I ever could, so I suggest you check it out.
  • Secondly, as a BI Pro setting up a self-service BI environment, you have to solve the problem of managing the supply of data to your end users. For example, you have a PowerPivot user that needs sales data aggregated to the day level, but only for the most recent week, plus a few other dimension tables to go with it, but who can’t write the necessary SQL themselves. You could write the SQL for them but once that SQL is embedded in PowerPivot it becomes very difficult to maintain – you would want to keep as much of the complexity out of PowerPivot as possible. You could set up something in the source database – maybe a series of views – that acts as a data supply layer for your end users. But what if you don’t have sufficient permissions on the source database to go in and create the objects you need? What if your source data isn’t actually in a database, but consists of other data feeds (not very likely today, I concede, but it might be in the future)? What if you’re leaving the project and need to set up a data supply layer that can be administered by some only-slightly-more-technical-than-the-rest power user? LightSwitch has an important role to play here too, I think: it makes it extremely easy to create feeds for specific reporting scenarios, and to apply security to those feeds, without any specialist database, .NET coding or SQL knowledge (there’s a rough sketch of what consuming such a feed might look like after this list).
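
Just to give a rough idea of what the consumption side might look like, here’s a hypothetical Data Explorer query pointing at an invented LightSwitch OData service URL and query name (I’m assuming the OData.Feed function here, and I haven’t tested this against a real LightSwitch app, so treat it as a sketch):

let
    // hypothetical URL: a query exposed by a LightSwitch app published as an OData service
    SalesByDay = OData.Feed("http://myserver/SalesDataService.svc/SalesByDay")
in
    SalesByDay

PowerPivot can also consume the same feed directly through its From Data Feeds option, which is probably what most end users would do anyway.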

These are just thoughts at this stage – as I said, I’m going to do some demos of this in my session at the PASS BA Conference, and I’ll turn these demos into blog posts after that. I haven’t used LightSwitch as a data provisioning layer in the real world, and if I ever do I’m sure that will spur me into writing about it too. In the meantime, I’d be interested in hearing your feedback on this…

Written by Chris Webb

April 1, 2013 at 9:45 am

Posted in BI, PowerPivot
