Archive for May 2007
An MDX Challenge: Debtor Days
with member measures.daysto10000 as
iif(
count(nonempty(null:{{[Date].[Date].currentmember} as mycurrentdate}.item(0),[Measures].[Internet Sales Amount]) as mynonemptydates)=0,
null,
iif(isempty([Measures].[Internet Sales Amount]) and (not isempty((measures.daysto10000, [Date].[Date].currentmember.prevmember)))
, (measures.daysto10000, [Date].[Date].currentmember.prevmember)+1,
iif(
count({{} as myresults,
generate(
mynonemptydates
, iif(count(myresults)=0,
iif(
sum(subset(mynonemptydates, count(mynonemptydates)-mynonemptydates.currentordinal), [Measures].[Internet Sales Amount]) > 10000
, {mynonemptydates.item(count(mynonemptydates)-mynonemptydates.currentordinal)} as myresults
, myresults)
, myresults
)
)
})=0, null,
count(myresults.item(0): mycurrentdate.item(0))-1
)
)
)
select
descendants([Date].[Calendar].[Calendar Year].&[2004]
,
[Date].[Calendar].[Date]) on 0,
non empty
descendants(
[Customer].[Customer Geography].[State-Province].&[HH]&[DE]
,
[Customer].[Customer Geography].[Postal Code])
on 1
from [Adventure Works]
where(measures.daysto10000)
On a cold cache this executes in a touch under 15 seconds on my laptop. The select statement puts all of the days in the year 2004 on columns, all of the postal codes in Hamburg on rows, and slices on the calculated measure defined in the with clause. Here are some things to notice about the calculated measure:
 The outermost iif simply says that if the set of dates from the start of the Date level to the current date contains no values at all for Internet Sales Amount, then return null. If there are values then the set of dates with values is stored in the named set mynonemptydates, declared inline.
 The next level of iif represents a recursive calculation, and I found that this was one of the extra touches that made a big difference to performance. It says that if the current date has no value for Internet Sales Amount but the value of the calculated measure is not null for the previous day, then simply add one to the value of the calculated measure from the previous day – this avoids a lot of extra work later on.
 The next level of iif is where I do the main part of the calculation, and this is going to need a lot of explaining… Put simply, from step 1 I've already got a set of members representing the dates from the start of time to the current date which have values in them; what I want to do is loop through that set from the end backwards doing a cumulative sum, stopping when that sum reaches 10000, then taking the date I've stopped at and finding the number of days from that date to the current date. Originally I attempted the problem like this:
iif(
count(
tail(
filter(
mynonemptydates
, sum(subset(mynonemptydates, mynonemptydates.currentordinal-1), [Measures].[Internet Sales Amount]) > 10000
)
,1) as myresults
)=0, null,
count(myresults.item(0): mycurrentdate.item(0))-1

Here I'm filtering the entire set to get the set of all dates where the sum from the current date to the end of the set is greater than 10000, then getting the last item in that set. This seemed inelegant, though: with a large set we'd potentially be doing the expensive sum many more times than we needed to. It seemed better to loop through the set backwards and somehow stop the loop when I'd reached the first member that fulfilled my filter criteria. But how was this going to be possible in MDX? I didn't manage it completely, but I did work out a way of stopping the expensive calculation as soon as I'd found the member I was looking for. Let's take a look at the specific section from the main query above:
iif(
count({{} as myresults,
generate(
mynonemptydates
, iif(count(myresults)=0,
iif(
sum(subset(mynonemptydates, count(mynonemptydates)-mynonemptydates.currentordinal), [Measures].[Internet Sales Amount]) > 10000
, {mynonemptydates.item(count(mynonemptydates)-mynonemptydates.currentordinal)} as myresults
, myresults)
, myresults
)
)
})=0, null,
count(myresults.item(0): mycurrentdate.item(0))-1
)

What I do first is declare an empty set inline called myresults. I then use the generate function to loop through the set mynonemptydates. The first thing you'll see inside the generate is an iif checking whether the count of myresults is 0; the first time we run this check it will be, so we need to do our cumulative sum. Because generate loops from the start of a set to the end, and I want to go in the other direction, I take the current ordinal of the iteration and find the cumulative sum from the member that is that many places from the end of the set up to the end of the set. Once I've got the cumulative sum I can check whether it is greater than 10000; if it is, I return a set from the iif statement and at the same time overwrite the declaration of myresults with a set of the same name which now contains just that one member. As a result, at all subsequent iterations the test count(myresults) returns 1 and I don't try to do the cumulative sum again. I was quite pleased at finding I could do this; I hadn't realised it was possible. It only makes about 0.5 seconds' difference to the overall query performance, though.

Finally, on the last line of the calculated measure I can take the member I’ve got in the set myresults and using the range operator find the number of days between it and the current date, which I’ve also stored in a named set called mycurrentdate.
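For anyone who finds the MDX hard to follow, here's the same reverse running-sum logic sketched in Python. The function name, the day-numbering and the sample data are my own inventions for illustration; the real query of course runs against the Date dimension rather than a list:

```python
def debtor_days(daily_sales, threshold=10000):
    """For each day, walk backwards through the non-empty days so far,
    accumulating sales until the running sum exceeds the threshold, then
    return the number of days from the stopping day to the current day.
    Returns None where the threshold is never reached (the MDX null)."""
    results = []
    for i, (day, amount) in enumerate(daily_sales):
        # the equivalent of mynonemptydates: days up to now with a value
        nonempty = [(d, a) for d, a in daily_sales[:i + 1] if a is not None]
        running, stop_day = 0, None
        for d, a in reversed(nonempty):  # loop from the end backwards
            running += a
            if running > threshold:      # stop as soon as we pass 10000
                stop_day = d
                break
        results.append(day - stop_day if stop_day is not None else None)
    return results
```

The prevmember optimisation in the second iif corresponds to a simple shortcut here: if today has no sales, the answer is just yesterday's answer plus one, so the inner loop could be skipped entirely.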
Pretty fun, eh? No, please don’t answer that question. But if you can think of an alternative, better-performing way of solving this problem I would love to hear it…
UPDATE: it turns out that Richard Tkachuk not only had a go at the same problem (although his interpretation of it is slightly different) but got just as excited about it as I did, and wrote up his findings here:
http://www.sqlserveranalysisservices.com/OLAPPapers/ReverseRunningSum.htm
Calculating Seasonally-Adjusted Values in MDX
My chalk talk yesterday at the BI Conference went… OK. Unfortunately, despite having asked for a projector so I could show my demo code, it turned out the screen I got was pretty small, so only the people in the front few rows could see anything. I told everyone that I would put the code up on my blog so they could study it in more detail, and this is the first such post.
One of the points I wanted to make when discussing how to calculate trend values for KPIs in MDX is that we really need to think about the formulas we're implementing, to make sure they actually provide useful results. Unfortunately the business people we talk to often have even less idea than we do about what kind of calculations actually make sense: there's a real need for input from someone with a statistical background, and I only wish I knew more about this subject. For example, if your sales vary a lot by season (you might be selling ice cream, so sales are always going to be much higher in summer than in winter), there's no point looking at previous-period growth to determine whether your sales are growing or not, because the strong seasonal component will mask the long-term trend.
As I said, for this presentation I looked at a book I've had hanging around on my shelf for years which is basically a stats textbook for managers and MBA students: "Quantitative Methods for Decision Makers" by Mik Wisniewski. It explains some simple algorithms and techniques for analysing data that are very relevant for BI practitioners, and I took the method for calculating a seasonally-adjusted total and translated it into an MDX query on Adventure Works as an illustration. Here it is:
with
--Calculate a 12-month moving average, from the 6 months up to and including
--the current member to the 6 months after
member measures.movingaverage as
avg(
[Date].[Calendar].currentmember.lag(5)
:
[Date].[Calendar].currentmember.lead(6)
, [Measures].[Internet Sales Amount])
--Centre the underlying trend by taking the average of the moving average
--for the current member and the next member
member measures.trend as
avg(
{[Date].[Calendar].currentmember,
[Date].[Calendar].currentmember.nextmember}
, measures.movingaverage)
--Find the deviation by dividing Internet Sales by Trend
member measures.deviation as
[Measures].[Internet Sales Amount] / measures.trend
--Find the average deviation for any given month
member measures.averagemonthlydeviation as
avg(
exists(
[Date].[Calendar].[Month].members
, {[Date].[Month of Year].currentmember})
, measures.deviation)
--Find the sum of these average deviations over the twelve months,
--then divide by 12 to find out by how much each month's average deviation
--needs to be adjusted
member measures.adjustedcomponent as
(sum(head([Date].[Calendar].[Month].members,12), measures.averagemonthlydeviation)
/ 12)
--Adjust the monthly deviation by the value we've just found
member measures.adjusteddeviation as
measures.averagemonthlydeviation / measures.adjustedcomponent
--Finally, find the seasonally-adjusted Internet Sales Amount
member measures.seasonallyadjustedsales as
[Measures].[Internet Sales Amount] / measures.adjusteddeviation
--Run the query
select {[Measures].[Internet Sales Amount], measures.trend, measures.deviation
, measures.averagemonthlydeviation,measures.adjustedcomponent
, measures.adjusteddeviation, measures.seasonallyadjustedsales}
on 0,
[Date].[Calendar].[Month].members
*
[Date].[Month of Year].[Month of Year].members
on 1
from
[Adventure Works]
This works on the following theory. What we want to do is take the Internet Sales Amount measure and remove the seasonal variations so we can view the underlying trend. If we assume that there are three factors in play with our sales (the long-term trend, seasonal variations and random factors; a health scare in July, say, could kill our ice cream sales that month), then we can say that Sales = Trend * Seasonal Variations * Random Factors. We can't do anything about the random factors, but rearranging this formula we can say that Trend = Sales / (Seasonal Variations * Random Factors).

To get there, we first estimate the trend by taking a twelve-month moving average, which hopefully smooths out the seasonal variations; we then find each month's deviation from that average, and then the average deviation for each calendar month (so I can say, for example, that January sales are on average 10% lower than usual and July sales are on average 25% higher). Over a whole year we'd like these average deviations to cancel each other out and add up to 12; of course they don't, so we find what they do add up to, divide that by 12, and divide each month's average deviation by this value. We can then divide Internet Sales Amount by the adjusted deviation to find the seasonally-adjusted value: the combination of the trend and the random factors.
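The same sequence of steps can be sketched in Python. The function name and the clamping of the moving-average window at the ends of the series are my own assumptions (in the MDX the boundary cells simply average over fewer members), so treat this as an illustration of the method rather than a faithful translation:

```python
def seasonally_adjust(monthly_sales):
    """Multiplicative seasonal adjustment: Sales = Trend * Seasonal * Random,
    so Adjusted = Sales / Seasonal. Assumes the series starts in January
    and covers a whole number of years."""
    n = len(monthly_sales)

    def moving_average(i):
        # 12-month window: 5 months before to 6 months after (clamped at edges)
        window = monthly_sales[max(0, i - 5):i + 7]
        return sum(window) / len(window)

    # centre the trend: average this month's and next month's moving average
    trend = [(moving_average(i) + moving_average(min(i + 1, n - 1))) / 2
             for i in range(n)]
    # deviation of each month's sales from the trend
    deviation = [s / t for s, t in zip(monthly_sales, trend)]
    # average deviation for each calendar month (every 12th value)
    avg_dev = [sum(deviation[m::12]) / len(deviation[m::12]) for m in range(12)]
    # the twelve averages should sum to 12; rescale so that they do
    adjustment = sum(avg_dev) / 12
    adjusted_dev = [d / adjustment for d in avg_dev]
    # divide each month's sales by its adjusted seasonal deviation
    return [s / adjusted_dev[i % 12] for i, s in enumerate(monthly_sales)]
```

On a series with no seasonal variation at all the adjustment is a no-op, which is a handy sanity check on the arithmetic.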
Hopefully I haven’t screwed up in my interpretation of how it works; it certainly isn’t the most robust or optimal bit of MDX I’ve written either but it shows what you could do if you wanted.