Chris Webb's BI Blog

Analysis Services, MDX, PowerPivot, DAX and anything BI-related

Counting Customers Who Have Bought All Selected Products

Market basket analysis is a fairly well-understood problem for SSAS – it’s best handled using many-to-many relationships, as shown in the excellent The Many-to-Many Revolution white paper. However, the other week I was asked a question that was an interesting variation on this problem, and which shows up one of the limitations of the many-to-many approach.

Let me show you an example. The following PivotTable on the Adventure Works cube shows the number of distinct customers who bought products in the product category Bikes:

[Screenshot: a PivotTable showing the Customer Count measure by country, with Bikes selected in the slicer]

The Customer Count measure is a distinct count measure, the Country hierarchy from the Customer dimension is shown on rows, and the slicer here shows the Category hierarchy from the Product dimension. If you select Bikes and Clothing in the slicer, like so:

[Screenshot: the same PivotTable with both Bikes and Clothing selected in the slicer]

… the measure values are larger because what you’re seeing now is the number of distinct customers who bought products in either the category Bikes or the category Clothing.

But what if you want to see the number of customers who bought Bikes AND Clothing? Well, setting up a many-to-many relationship will give you this (and a lot more useful stuff too) but there are some limitations with this approach:

  • Adding the m2m relationship and the extra measure groups it needs increases processing time
  • You have to select your two product categories from two different (but identical) hierarchies
  • You can only select two product categories – if you want to add a third to your AND filter you need to add a third m2m relationship, and so on. It’s not very flexible.

It is possible to get around these issues with a pure MDX approach, and apply an AND filter that works with however many product categories the user selects.

The key to understanding how this works is to see how you would write an MDX calculated member that does an AND filter on two product categories:

WITH
MEMBER MEASURES.[Bikes And Clothing] AS
    COUNT(
        EXISTING
        NONEMPTY(
            NONEMPTY(
                [Customer].[Customer].[Customer].MEMBERS,
                ([Measures].[Internet Sales Amount],[Product].[Category].&[3])),
            ([Measures].[Internet Sales Amount],[Product].[Category].&[1]))
    )
SELECT
{MEASURES.[Bikes And Clothing]}
ON 0,
NON EMPTY
[Customer].[Country].[Country].MEMBERS
ON 1
FROM
[Adventure Works]

[Screenshot: the results of this query, showing the Bikes And Clothing measure by country]

Here I’ve used two nested NonEmpty() functions to filter the set of all customers, first to get the ones that bought Clothing, then to filter these customers again to get the ones that bought Bikes; there’s also an EXISTING there to get only the ones in the current country.

Once again, to add more product categories to the AND filter you need to add more nested NonEmpty() functions… which means you need a way to generate the code dynamically, which of course you can do using StrToSet(). Normally I avoid using StrToSet() inside MDX calculations because it can cause serious performance problems, but in this case it’s the best choice.

The following named set and calculated measure show how to solve the problem for Adventure Works:

CREATE DYNAMIC SET [Selected Product Categories] AS
    EXISTING [Product].[Category].[Category].MEMBERS;

CREATE MEMBER CURRENTCUBE.MEASURES.[Distinct Customers with all selected Product Categories] AS
    IIF(
        [Measures].[Internet Sales Amount]=0,
        NULL,
        COUNT(
            EXISTING
            STRTOSET(
                GENERATE(
                    [Selected Product Categories],
                    "NONEMPTY(", "")
                + "[Customer].[Customer].[Customer].MEMBERS" +
                GENERATE(
                    [Selected Product Categories],
                    ", ([Measures].[Internet Sales Amount], " +
                    [Selected Product Categories].CURRENT.UNIQUENAME
                    + "))", "")
            )
        )
    );

The dynamic named set is there to help work out what is selected in the slicer in my Excel worksheet, as shown in Mosha’s old post here. The calculated measure is where all the interesting stuff happens: it uses the ability of the Generate() function to iterate over a set (in this case the dynamic named set), evaluate a string expression and concatenate these strings. The output of this is a series of nested NonEmpty()s, which then goes to StrToSet() to be evaluated, and then the contents of that set are counted.
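To make what’s going on a little clearer, here’s roughly the string that the two Generate() calls build, and that StrToSet() then evaluates, when Bikes and Clothing are selected in the slicer. This is my own illustration rather than output taken from the measure, and I’m assuming the same member unique names used in the earlier query (the order of the two categories simply follows the order of the members in the set; the result is the same either way):

NONEMPTY(NONEMPTY(
    [Customer].[Customer].[Customer].MEMBERS,
    ([Measures].[Internet Sales Amount], [Product].[Category].&[1])),
    ([Measures].[Internet Sales Amount], [Product].[Category].&[3]))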

[Screenshot: a PivotTable using the new measure, showing the number of customers who bought both Bikes and Clothing by country]

This technique does not replace using a m2m relationship, because it will not allow you to show a query with product category on rows and columns and the number of customers who bought each combination shown. However, if all you want to do is show the number of customers who have bought one specific combination, this has a lot of advantages.

For all of you MDX fans out there, here’s another approach I came up with which doesn’t perform quite as well but is so much fun I had to share it:

CREATE MEMBER CURRENTCUBE.MEASURES.[Distinct Customers with all selected Product Categories V2] AS
    IIF(
        [Measures].[Internet Sales Amount]=0,
        NULL,
        COUNT(
            UNION(
                INTERSECT(EXISTING [Customer].[Customer].[Customer].MEMBERS AS MYCUSTOMERS, {}),
                GENERATE(
                    [Selected Product Categories],
                    IIF(
                        [Selected Product Categories].CURRENT.ITEM(0) IS TAIL([Selected Product Categories]).ITEM(0),
                        NONEMPTY(MYCUSTOMERS, [Measures].[Internet Sales Amount]),
                        INTERSECT(NONEMPTY(MYCUSTOMERS, [Measures].[Internet Sales Amount]) AS MYCUSTOMERS, {})
                    )
                )
            )
        )
    );

I’m using Generate() here again, but this time I’m using it with inline named sets (which are even worse for performance than StrToSet()) in a way similar to this classic post. The theory is the same though: it’s a way of calling NonEmpty() multiple times over a set with an arbitrary number of items in it.

Here’s what the calculation does:

  • Inside the first Intersect() I’m declaring an inline named set called MYCUSTOMERS, then doing the Intersect() with an empty set to return… an empty set. But I’ve managed to declare my inline named set, which is the point here.
  • I’m then iterating over the same dynamic named set shown in my earlier example using Generate(), and:
    • For all but the last product category in that set, I’m using NonEmpty() to filter the contents of MYCUSTOMERS by the current product category and then overwriting the contents of MYCUSTOMERS with the output. However, for these iterations of Generate() I’m returning an empty set.
    • For the last product category in the set I’m actually returning the output of NonEmpty() over MYCUSTOMERS for the current product category. The reason I’m only returning something other than an empty set on the last iteration of Generate() is that Generate() returns the union of all the sets returned by each iteration, and that would give me the wrong results! I only want to output the set from the final iteration.

Written by Chris Webb

September 12, 2014 at 9:30 am

Posted in Analysis Services, MDX

Power Pivot / Power Query Read-Only Connection Problems In Excel 2013 – And What To Do About Them

Anyone who has tried to do any serious work with Power Pivot and Power Query will know about this problem: you use Power Query to load some tables into the Data Model in Excel 2013; you make some changes in the Power Pivot window; you then go back to Power Query, make some changes there, and you get the dreaded error:

We couldn’t refresh the table ‘xyz’ from the connection ‘Power Query – xyz’. Here’s the error message we got:

COM Error: Microsoft.Mashup.OleDbProvider; The query ‘xyz’ or one of its inputs was modified in Power Query after this connection was added. Please disable and re-enable loading to the Data Model for this query..

[Screenshot: the refresh error message dialog]

This post has a solution for the same problem in Excel 2010, but it doesn’t work for Excel 2013 unfortunately. There is a lot of helpful information out there on the web about this issue if you look around, though, and that’s why I thought it would be useful to bring it all together into one blog post and also pass on some hints and tips about how to recover from this error if you get it. This is the single biggest source of frustration among the Power Query users I speak to; a fix for it is being worked on, and I hope it gets released soon.

Problem Description

Why does this problem occur? Let’s take a simple repro.

  1. Import the data from a table in SQL Server using Power Query. Load it into the Excel Data Model.
  2. Open the PowerPivot window in Excel, then create measures/calculated fields, calculated columns, relationships with other tables as usual.
  3. Go back to the worksheet and build a PivotTable from data in this table, using whatever measures or calculated columns you have created.
  4. Go back to the PowerPivot window and rename one of the columns there. The column name change will be reflected in the PivotTable and everything will continue to work.
  5. Re-open the Power Query query editor, and then rename any of the columns in the table (not necessarily the one you changed in the previous step). Close the query editor window and when the query refreshes, bang! you see the error above. The table in the Excel Data Model is unaffected, however, and your PivotTable continues to work – it’s just that now you can’t refresh the data any more…
  6. Do what the error message suggests and change the Load To option on the Power Query query, unchecking the option to load to the Data Model. When you do this, on the very latest build of Power Query, you’ll see a “Possible Data Loss” warning dialog telling you that you’ll lose any customisations you made. Click Continue, and the query will be disabled. The destination table will be deleted from your Excel Data Model and your PivotTable, while it will still show data, will be frozen.
  7. Change the Load To option on the query to load the data into the Excel Data Model again. When you do this, and refresh the data, the table will be recreated in the Excel Data Model. However, your measures, calculated columns and relationships will all be gone. What’s more, although your PivotTable will now work again, any measures or calculated columns you were using in it will also have gone.
  8. Swear loudly at your computer and add all the measures, calculated columns and relationships to your Data Model all over again.

So what exactly happened here? The important step is step 4. As Miguel Llopis of the Power Query team explains here and here, when you make certain changes to a table in the Power Pivot window the connection from your Power Query query to the Excel Data Model goes into ‘read-only’ mode. This then stops Power Query from making any subsequent changes to the structure of the table.

What changes put the connection to the Excel Data Model in ‘read-only’ mode?

Here’s a list of changes (taken from Miguel’s posts that I linked to above) that you can make in the PowerPivot window that put the connection from your query to the Data Model into ‘read-only’ mode:

  • Edit Table Properties
  • Column-level changes: Rename, Data type change, Delete
  • Table-level changes: Rename, Delete
  • Import more tables using Power Pivot Import Wizard
  • Upgrade existing workbook

How can you tell whether your connection is in ‘read-only’ mode?

To find out whether your connection is in ‘read-only’ mode, go to the Data tab in Excel and click on the Connections button. Then, in the Workbook Connections dialog you’ll see the connection from Power Query to the Data Model listed – it will be called something like ‘Power Query – Query1’ and the description will be ‘Connection to the Query1 query in the Data Model’. Select this connection and click on the Properties button. When the Connection Properties dialog opens, go to the Definition tab. If the connection is in read-only mode the properties will be greyed out, and you’ll see the message ‘Some properties cannot be changed because this connection was modified using the PowerPivot Add-In’. If you do see this message, you’re already in trouble!

[Screenshot: the Connection Properties dialog with the Definition tab greyed out]

How to avoid this problem

Avoiding this problem is pretty straightforward: if you’re using Power Query to load data into the Excel Data Model, don’t make any of the changes listed above in the PowerPivot window! Make them in Power Query instead.

How to recover from this problem

But what if your connection is already in ‘read-only’ mode? There is no magic solution, unfortunately: you are going to have to rebuild your model. However, there are two things you can do to reduce the amount of pain you have to go through to recreate your model.

First, you can use the DISCOVER_CALC_DEPENDENCY DMV to list out all of your measure and calculated column definitions to a table in Excel. Here’s some more information about the DMV:

http://cwebbbi.wordpress.com/2011/09/17/documenting-dependencies-between-dax-calculations/

To use this, all you need to do is to create a DAX query table in the way Kasper shows at the end of this post, and use the query:

select * from $system.discover_calc_dependency
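If you only want to see the measure definitions, you can also filter the DMV output with a WHERE clause. A hedged example – I’m assuming the OBJECT_TYPE value here, so check what the unfiltered output returns for your model:

select * from $system.discover_calc_dependency where object_type = 'MEASURE'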

Secondly, before you disable and re-enable your Power Query query (as in step 6 above), install the OLAP PivotTable Extensions add-in (if you don’t already have it) and use its option to disable auto-refresh on all of your PivotTables, as described here:

http://olappivottableextend.codeplex.com/wikipage?title=Disable%20Auto%20Refresh&referringTitle=Home

http://www.artisconsulting.com/blogs/greggalloway/Lists/Posts/Post.aspx?ID=26

Doing this prevents the PivotTables from auto-refreshing when the table is deleted from the Data Model after you disable the Power Query query. This means that they keep all of their references to your measures and calculated columns, so once you have recreated them in your Data Model (assuming that all of the names are still the same) and re-enabled auto-refresh, the PivotTables will not have changed at all and will continue to work as before.

[After writing this post, I realised that Barbara Raney covered pretty much the same material in this post: http://www.girlswithpowertools.com/2014/06/power-query-refresh-fails/ . I probably read that post when it was published and then forgot about it. I usually don't blog about things that other people have already blogged about, but since I'd already done the hard work and the tip on using OLAP PivotTable Extensions is new, I thought I'd post anyway. Apologies...]

Written by Chris Webb

September 8, 2014 at 9:30 am

Create Your Own Relationships Between Tables In The Excel Data Model With Power Query

You probably know that, when you are importing data from multiple tables in SQL Server into the Excel Data Model in Excel 2013 using Power Query, Power Query will automatically create relationships between those tables in the Data Model. But did you know that you can get Power Query to do this for other data sources too?

Now wait – don’t get excited. I’ve known about this for a while but not blogged about it because I don’t think it works all that well. You have to follow some very precise steps to make it happen and even then there are some problems. That said, I think we’re stuck with the current behaviour (at least for the time being) so I thought I might as well document it.

Consider the following Excel worksheet with two tables in it, called Dimension and Fact:

[Screenshot: the Dimension and Fact tables in the worksheet]

If you were to load these two tables into the Excel Data Model, you would probably want to create a relationship between the two tables based on the FruitID column. Here are the steps to use Power Query to create the relationship automatically:

  1. Click inside the Dimension table and then, on the Power Query tab in the Excel ribbon, click the From Table button to create a new query.
  2. When the Query Editor window opens, right click on the FruitID column and select Remove Duplicates.
    [Screenshot: the Remove Duplicates option in the Query Editor]
    Why are we doing this when there clearly aren’t any duplicate values in this column? The new step contains the expression
    Table.Distinct(Source, {"FruitID"})
    …and one of the side-effects of using Table.Distinct() is that it adds a primary key to the table. Yes, tables in Power Query can have primary keys – the Table.AddKey() function is another way of doing this (see the sketch after this list). There’s a bit more information on this subject in my Power Query book, which I hope you have all bought!
  3. Click the Close & Load to.. button to close the Query Editor, and then choose the Only Create Connection option to make sure the output of the query is not loaded anywhere and the query is disabled, then click the Load button. (Am I the only person that doesn’t like this new dialog? I thought the old checkboxes were much simpler, although I do appreciate the new flexibility on where to put your Excel table output)
    [Screenshot: the Load To dialog with the Only Create Connection option selected]
  4. Click inside the Fact table in the worksheet, click the From Table button again and this time do load it into the Data Model.
  5. Next, in the Power Query tab in the Excel ribbon, click the Merge button. In the Merge dialog select Dimension as the first table, Fact as the second, and in both select the FruitID column to join on.
    [Screenshot: the Merge dialog with the FruitID column selected in both tables]
  6. Click OK and the Query Editor window opens again. Click the Close & Load to.. button again, and load this new table into the Data Model.
  7. Open the Power Pivot window and you will see that not only have your two tables been loaded into the Data Model, but a relationship has been created between the two:
    [Screenshot: the Power Pivot window showing the relationship created between the two tables]
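Incidentally, here’s a rough sketch (my own, not from the original steps above) of what step 2 could look like if you used Table.AddKey() to mark the FruitID column as a primary key instead of relying on Remove Duplicates – I’m assuming the same Dimension table name used above:

let
    Source = Excel.CurrentWorkbook(){[Name="Dimension"]}[Content],
    //mark the FruitID column as a primary key without removing any rows
    AddedKey = Table.AddKey(Source, {"FruitID"}, true)
in
    AddedKey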

What are the problems I talked about, then? Well, for a start, if you don’t follow these instructions exactly then you won’t get the relationship created – it is much harder than I would like. There may be other ways to make sure the relationships are created but I haven’t found them yet (if you do know of an easier way, please leave a comment!). Secondly, if you delete the two tables from the Data Model and delete the two Power Query queries, and then follow these steps again, you will find the relationship is not created. That can’t be right. Thirdly, I don’t like having to create a third query with the Merge, and would prefer it if I could just create two queries and define the relationship somewhere separately. With all of these issues I don’t think there’s any practical use for this functionality right now.

I guess the reason I think the ability to create relationships automatically is so important is that the one thing the Excel Data Model/Power Pivot/SSAS Tabular sorely lacks is a simple way to script the structure of a model. Could Power Query and M one day be the modelling language that Marco asks for here? To be fair to the Power Query team this is not and should not be their core focus right now: Power Query is all about data acquisition, and this is data modelling. If this problem was solved properly it would take a lot of thought and a lot of effort. I would love to see it solved one day though.

You can download the sample workbook for this post here.

Written by Chris Webb

September 2, 2014 at 10:06 am

This Is My 1000th Blog Post

Just a few months away from the tenth anniversary of my first post here, I’ve reached the milestone that is my 1000th blog post. If you’ve been with me since back then, thanks for reading! I have no idea how I managed to write so much – it’s an average of around two posts per week, which I certainly haven’t managed recently – but I suspect that the answer lies in the fact that I posted a lot of rubbish here in the early years that I’m embarrassed by now.

I can remember the day when I decided to start this blog quite well. It was just after Christmas so the office was quiet and I didn’t have much work to do; blogging was the cool new thing back in late 2004 and having discovered that Mosha had started a blog I thought it was something I should be doing too, so as not to be left behind. Microsoft had just launched its own blogging platform so I signed myself up. I didn’t think I would stick at it this long…

At first I thought I would just use it for writing up solutions to common Analysis Services and MDX problems, so that I didn’t have to keep repeating myself when I was answering questions on the microsoft.public.sqlserver.olap newsgroup. I kept going, though, for a lot of other reasons:

  • To remember what I’ve learned. If I didn’t write this stuff down I would forget it, and trust me, I’m always googling for old posts here. This also explains why there is very little overall structure or purpose to what I write about. Technical books need to cover a topic very methodically: start at the basics, explain all the concepts and functionality, not miss anything out, and so on. Here, if I learn something interesting and useful while at work, or helping someone on a forum, or while playing around with a new tool, I just need to write that one thing down and not worry about whether it fits into some greater plan.
  • I also find that the act of writing up a problem or topic for a post helps me understand it better. To be able to explain a technical concept you first have to be sure you understand it properly yourself, and writing for other people forces you to do that.
  • To pass on Microsoft BI-related news. I work with these tools every day and so it’s natural that I want to find out what new toys I’ll have to play with in the future. I find this stuff interesting and fun, and it seems like there are several thousand other people around the world who also want to know what’s going on (even if we might not want to admit this publicly). I like airing my opinions too: sometimes Microsoft does things I agree with, sometimes it does things I think are crazy, and since my career and business is wholly dependent on Microsoft BI I think the occasional bit of public feedback is healthy and allowable. Brent Ozar sums up my feelings on this subject perfectly here. I’ve got in trouble once or twice for things I’ve written, but I’ve never regretted writing any of my posts.
  • It’s marketing for my consultancy and training. I have to make a living somehow, and if I didn’t blog then it would be much harder to find customers – I think my blog is much more valuable in this respect than writing books or speaking at conferences or user groups. I don’t want to sound cynical, though, and I don’t see this blog as something that is purely commercial. I love to share and it just so happens that sharing my knowledge is also good for business. Some two years after starting this blog, just after I resigned from my permie job to become a self-employed consultant, one of my soon-to-be ex-colleagues said to me “You know, you’ll have to stop blogging now: why would anyone hire you if they can read everything you know on your blog for free?”. I didn’t have a good answer for him at the time but I soon found that if someone finds the answer to a problem on my blog, they are much more likely to think about hiring me when they have a problem they can’t solve. What’s more, I firmly believe that the way that people in the SQL Server community share knowledge publicly, even when they are aware that this knowledge could be used by their competitors, means that the community as a whole is stronger, SQL Server is more successful, and we all benefit more commercially than if we had not shared in the first place.
  • I enjoy writing so I’m quite happy to spend my spare time writing blog posts. There’s no way I could have forced myself to write a thousand posts if I didn’t enjoy doing it. I also travel a lot for work, so that results in a lot of time spent in airports and hotel rooms with nothing better to do. To make another comparison with writing tech books: a tech book has to be objective, impartial, polished, structured, sober and impersonal, whereas a blog is (or at least in my opinion should be) personal, subjective, haphazard, rough-edged and sometimes controversial. This makes blogging less of an effort and more of a pleasure.
  • Finally, I admit it, I get a kick out of knowing that when I write something there are people out there who want to read it.

Will I make it to my 2000th post? I have no idea, but I probably will if Microsoft are still making BI tools and I’m still using them.

Written by Chris Webb

August 31, 2014 at 11:45 pm

Posted in Random Thoughts

Power Query Functions That Return Functions

You’re probably aware that, in Power Query, a query can return a function. So for example here’s a very simple query (so simple that no let statement is needed) called MultiplyTwoNumbers with the following definition:

(x as number, y as number) => x * y

It can be used on the following table in Excel:

…to multiply the numbers in the column called Number by two and show the result in a custom column like so:

let
    Source = Excel.CurrentWorkbook(){[Name="Data"]}[Content],
    #"Inserted Custom" = Table.AddColumn(Source, "Custom", each MultiplyTwoNumbers(2, [Number]))
in
    #"Inserted Custom"

Here’s the output:

It’s also the case that a function can return another function. Consider the following query, called MultiplyV2:

let
    EnterX = (x as number) =>
        let
            EnterY = (y as number) => x * y
        in
            EnterY
in
    EnterX

It is a function that takes a single parameter, x, and it returns a function that takes a single parameter, y. The function that is returned multiplies the value of x by the value of y. Here’s an example of how it can be used on the table shown above:

let
    //Return a function that multiplies by 2
    MultiplyBy2 = MultiplyV2(2),
    //Load data from the table
    Source = Excel.CurrentWorkbook(){[Name="Data"]}[Content],
    //Use the MultiplyBy2 function in a custom column
    #"Inserted Custom" = Table.AddColumn(Source, "Custom", each MultiplyBy2([Number]))
in
    #"Inserted Custom"

This gives exactly the same result as before:

In this query, the MultiplyBy2 step calls the MultiplyV2 function with the argument 2, and this returns a function that multiplies the values passed to it by 2. This function can then be called in the final step where the custom column is added to the table, using the expression MultiplyBy2([Number]).
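If you want a quick way to see this chaining at work (my own illustration, not from the original example), you can also invoke the outer function and the function it returns in a single expression:

let
    //call MultiplyV2 to get a "multiply by 3" function, then immediately call that function with 4
    Result = MultiplyV2(3)(4)  //returns 12
in
    Result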

Interesting, isn’t it? I hope this satisfies your curiosity, Marco 🙂

You can download the sample workbook for this post here.

 

 

Written by Chris Webb

August 24, 2014 at 10:09 pm

Posted in Uncategorized

Removing Punctuation From Text In Power Query

In one of my first posts on Power Query (and still my favourite) I found the top 100 words in the complete works of Shakespeare. As always when you’re learning a new tool, though, I look back at what I wrote then and realise there are better ways of doing things… one of which is removing all of the punctuation from a piece of text.

In the post I’m talking about I used the following expression to remove punctuation:

Text.Remove(RemoveLineBreaks,{"," ,"." ,"?" ,";" ,":" ,";" ,"’","@" ,"#" ,"~" ,"{" ,"[" ,"}" ,"]" ,"(" ,")", "*"})

However, as you can see, it only removes the punctuation I could be bothered to hard-code in the list – luckily Shakespeare didn’t use too much exotic punctuation! There is a better way of doing this, though, and it relies on the Character.FromNumber() function which takes a number and returns the equivalent Unicode character; for example, the expression Character.FromNumber(38) returns an ampersand (&) symbol.

Looking at the list of Unicode characters here it’s easy to identify the ranges of numbers that represent punctuation characters. These ranges can then be represented in M as lists. Remembering that lists of ranges of numbers can be declared easily by using the notation {1..5}, which results in the list of numbers {1,2,3,4,5}, you can write a list of lists containing the numbers representing the Latin punctuation characters in Unicode like so:

{{0..31},{33..47},{58..64},{91..96},{123..191}}

and you can use List.Combine() to turn this list of lists into a single list of numbers.  Finally, you can use List.Transform() and Character.FromNumber() to get a list of the Unicode characters for these numbers, and pass that list to Text.Remove(). Here’s a query showing everything:

let
    //get a list of lists containing the numbers of Unicode punctuation characters
    numberlists = {{0..31},{33..47},{58..64},{91..96},{123..191}},
    //turn this into a single list
    combinedlist = List.Combine(numberlists),
    //get a list of all the punctuation characters that these numbers represent
    punctuationlist = List.Transform(combinedlist, each Character.FromNumber(_)),
    //some text to test this on
    inputtext = "Hello! My name is Chris, and I'm hoping that this *cool* post will help you!",
    //the text with punctuation removed
    outputtext = Text.Remove(inputtext, punctuationlist)
in
    outputtext

 

The output of this query is this:

[Screenshot: the query output, showing the test sentence with all punctuation removed]

You can download the sample workbook for this post here.

Written by Chris Webb

August 18, 2014 at 11:20 am

Posted in Power Query

Comparers, Combiners, Replacers and Splitters in Power Query

At the end of the Power Query Formula Library Specification (which can be downloaded here) are sections on Comparer, Combiner, Replacer and Splitter functions. These functions are most often used in conjunction with other functions like Table.CombineColumns() and Table.SplitColumn, but what you may not realise from the documentation (which also has a few minor but nonetheless confusing bugs in it) is what these functions do: they are functions that return functions, and the functions that they return can be used independently just like any other function.

Take Splitter.SplitTextByDelimiter() as an example. It returns a function that splits a piece of text by a delimiter, and returns a list containing the resulting pieces. The following M code calls this function to return a function that splits comma delimited text:

let
    demo = Splitter.SplitTextByDelimiter(",")
in
    demo

 

As noted here, once you have a query that returns a function you can see that function’s signature and invoke it from the Query Editor window. Here’s what the query above shows in the Query Editor window:

[Screenshot: the Query Editor showing the function returned by the query, with an Invoke button]

If you click the Invoke button and enter the text one,two,three,four, as follows:

[Screenshot: the text one,two,three,four entered as the function’s parameter]

…the code for the query becomes:

let
    demo = Splitter.SplitTextByDelimiter(","),
    Invokeddemo = demo("one,two,three,four")
in
    Invokeddemo

What is returned is the list {"one", "two", "three", "four"}, which looks like this in the Query Editor window:

[Screenshot: the Query Editor showing the resulting list]

There are various other Splitter functions that can be used to return functions that split text in different ways. Similarly, the Combiner functions return functions that can be used to combine a list of text into a single piece of text. For example:

let
    demo = Combiner.CombineTextByDelimiter("--"),
    Invokeddemo = demo({"one","two","three","four"})
in
    Invokeddemo

This returns the text:

one--two--three--four

The Replacer functions return functions for replacing values in text, while the Comparer functions return functions that can be used for comparing text using specific cultures and case sensitivities.
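Here’s a hedged sketch (my own, not taken from the Library Specification) showing two of these functions invoked directly, in the same way as the Splitter example above. Replacer.ReplaceText() replaces one piece of text inside another, and Comparer.OrdinalIgnoreCase is a comparer function that returns 0 when two values are treated as equal under a case-insensitive ordinal comparison:

let
    //replace the text "four" with "FOUR"
    ReplacedText = Replacer.ReplaceText("one,two,three,four", "four", "FOUR"),
    //compare two pieces of text ignoring case; 0 means they are treated as equal
    ComparisonResult = Comparer.OrdinalIgnoreCase("POWER QUERY", "power query")
in
    [Replaced = ReplacedText, Comparison = ComparisonResult]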

Written by Chris Webb

August 11, 2014 at 9:30 am

Posted in Power Query
