It’s been all over Twitter today and Kasper has already blogged about it, but I thought this was worth a blog post from me all the same: the SSAS dev team are looking for feedback on features for the next version of SSAS and have put together a survey here:

It’s not a list of features that will definitely be delivered, and doesn’t cover everything they’re thinking about, but it’s a way to help them prioritise some features over others and it should take no more than 20 minutes to complete, so why not help them out?

I’ve been doing some performance tuning work on SSAS Multidimensional recently that has forced me to look at some behaviour I’ve observed several times but never properly understood: what happens with Storage Engine caching when you are querying multiple measures in the same measure group. Here are some of my findings (thanks, as always, to Akshai and Marius for answering my questions on this), although this post only deals with a few basic scenarios…

Consider the following, quite basic cube built from Adventure Works. It has one measure group and two measures, Sales Amount and Tax Amount, that both have AggregateFunction Sum:

And a single Date dimension with the following attribute relationships:

If I run a Profiler trace, clear the cache and then run two queries one after the other – the first requesting Sales Amount and the second requesting Tax Amount – the second query hits the Storage Engine cache even though this is the first time Tax Amount has been queried since the cache was cleared. This is because when you query for one measure, the SSAS Storage Engine will retrieve data for all other measures in the same measure group at the granularity of data requested.
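To illustrate, here’s a pair of queries of the kind that shows this effect when run one after the other on a cold cache. Note that the cube name [Sales] and the [Date].[Year] hierarchy names are assumptions based on the design described above, not taken from the original queries:

```
-- Query 1: request Sales Amount with Years on rows
-- (hypothetical cube name [Sales]; [Date].[Year] hierarchy assumed)
SELECT {[Measures].[Sales Amount]} ON COLUMNS,
[Date].[Year].[Year].MEMBERS ON ROWS
FROM [Sales]

-- Query 2: request Tax Amount at the same granularity;
-- this hits the Storage Engine cache warmed by Query 1
SELECT {[Measures].[Tax Amount]} ON COLUMNS,
[Date].[Year].[Year].MEMBERS ON ROWS
FROM [Sales]
```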

This means that the AggregateFunction property of a measure is significant here. If I add a new measure to the cube with AggregateFunction set to Count instead of Sum:

I see the same thing happening, i.e. queries that request data for Sales Amount or Tax Amount also warm the Storage Engine cache with values for Internet Sales Count. This is because a query for Internet Sales Count can be answered with data of the same granularity as a query for Sales Amount. However, if I add a new measure called Last Sales Amount with AggregateFunction LastNonEmpty:

And then clear the cache, and run the two following queries one after the other:
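The two queries would be along these lines – again, the cube name [Sales] and the [Date].[Year] hierarchy names are assumptions based on the cube design described above:

```
-- Query 1: fully additive Sales Amount, Years on rows
SELECT {[Measures].[Sales Amount]} ON COLUMNS,
[Date].[Year].[Year].MEMBERS ON ROWS
FROM [Sales]

-- Query 2: semi-additive Last Sales Amount, also with Years on rows;
-- this does NOT hit the cache warmed by Query 1
SELECT {[Measures].[Last Sales Amount]} ON COLUMNS,
[Date].[Year].[Year].MEMBERS ON ROWS
FROM [Sales]
```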

I can see that the first query does not warm the cache for the second query – both queries go to disk:

Why is this happening? Why isn’t the cache being used? A clue lies in the Query Subcube Verbose event for both queries. For the first query, using Sales Amount, the following granularity of data is being requested:

Dimension 0 [Date] (0 0 0 *) [Date]:0 [Month]:0 [Quarter]:0 [Year]:*

Whereas the second query, using Last Sales Amount, requests this granularity:

Dimension 0 [Date] (* 0 0 *) [Date]:* [Month]:0 [Quarter]:0 [Year]:*

Both queries have Years on rows, but because Last Sales Amount is semi-additive the values returned are actually from the Year and Date granularity. So, when the semi-additive measure is requested in the second query the data needed for it is not in the Storage Engine cache: the first query requested data at the Year granularity only.

From what I understand, the logic governing this behaviour is very complex: the exact query plan that gets generated will depend on the overall design of your cube, the AggregateFunction used for the measures in each measure group (measures with measure expressions behave in a similar way to semi-additive measures) and the queries you’re running. However, it is useful to be aware of this kind of behaviour when designing and tuning SSAS cubes. For example, if you have a large number of measures (tens or even a hundred) in the same measure group, it could be worth splitting them out into separate measure groups to improve performance, especially if some measures are never queried together – although you would need to test this thoroughly first. This behaviour is also relevant in cases where you’re designing aggregations manually.

Yesterday, the SQLBits Committee (which I’m a member of) announced the dates for SQLBits XI: it will be taking place on May 2nd-4th 2013 at the East Midlands Conference Centre in Nottingham, UK. SQLBits is, of course, Europe’s biggest SQL Server conference and the second-biggest dedicated SQL Server conference in the world, attracting attendees from many different countries. Apart from top-notch sessions from the world’s leading SQL Server experts you can also expect to have a lot of fun: at previous events attendees have had the chance to groove with the Beatles (well, ok, maybe they were just pretending to be the Beatles), play darts with professionals and hang out with Steve Wozniak (the real one). You should come! To find out more, keep an eye on http://sqlbits.com/

I’ve got a large backlog of serious technical blog posts to write but today, since I’m still recovering from my trip to the PASS Summit in Seattle last week, I couldn’t resist going back to my favourite data visualisation tool NodeXL and having some fun with it instead. Anyone who saw the keynotes last week will know that the future of BI is all about analysing data from Twitter – forget about that dull old sales or financial data you used to use on your BI project – and so, inspired by Sam Vanga’s blog post from today on that same topic, I decided to take a look at some Twitter data myself.

In NodeXL I imported 1757 tweets from 515 different people that included the #sqlpass hashtag from the 8th of November when Twitter activity at the conference was at its peak (I couldn’t import any more than that – I assume Twitter imposes a limit on the number of search results it returns). In basic terms, when NodeXL imports data from Twitter each Twitter handle becomes a point on a graph, and a line is drawn between two Twitter handles when they appear in a tweet together. I won’t bother going into any detail about how I built my graph because analysing the results is much more interesting, so I’ll just say that after playing around with the clustering, layout and grouping options here’s what I came up with:

It looks very pretty from this distance but it’s not very useful if you can’t read the names, so I saved a much larger .png version of this image here for you to download and explore, and if you’ve got NodeXL you can download the original workbook here (don’t bother trying to open it in the Excel Web App). It’s fascinating to look at – even though the data comes from a very restricted time period the cliques in the SQL Server world emerge quite clearly. For example, here’s the group that the clustering algorithm has put me in (I’m @Technitrain), which is at the bottom of the graph on the left-hand side:

There’s a very strong UK/SQLBits presence there (@timk_adatis and @allansqlis for example), but also a strong BI presence as well with @marcorus and @markgstacey, which is pretty much what you’d expect. There are several other small groups like this, plus a large number of unconnected people in groups on their own in the bottom right-hand corner of the graph, but on the top left-hand side there’s a monster group containing a lot of well-known SQL Server personalities. Jen Stirrup (@jenstirrup) is right in the centre of it, partly because she’s one of the SQL Server Twitter royalty and partly because of her well-deserved PASSion award that day. Highlighting in red just the tweets that involved her shows at the very highest level how well-connected she is:

Keeping Jen selected and zooming in shows the people clustered together with Jen a bit better:

Selecting not only Jen’s tweets but also the tweets of the people who tweeted to her and also to each other (which is one of many useful features in NodeXL), highlights just how close the members of this group are:

This is clearly where the popular kids hang out…

Anyway, I hope this gives you an idea of the kind of thing that’s possible with NodeXL and Twitter data and inspires you to go and try it yourself. Hell, NodeXL is so much fun it might prove to the DBA crowd that BI doesn’t need to be boring!

For my last post from the PASS Summit, I thought I’d mention briefly some of the products that caught my eye as I wandered round the exhibition hall this afternoon:

OData Connectors from RSSBus (http://www.rssbus.com/odata/), a series of web apps that expose OData feeds (which then of course can be consumed in PowerPivot and SSAS Tabular) from a variety of data sources including Quickbooks, Twitter and MS CRM. I’d seen the website a month or so ago, actually, but I found out today they are close to releasing OData connectors for Google, Google Docs, Facebook, Email and PowerShell as well, which open up some intriguing possibilities for PowerPivot analysis. I can imagine doing a really cool demo where I set up an email address, got the audience to email me, then hooked PowerPivot up to my inbox and analysed the emails as they came in!

XLCubed (http://www.xlcubed.com/) – well, ok, they aren’t exactly new to me but it was good to have a chat with the guys on the stand. It’s worth pointing out they have a good mobile BI story for SSAS users.

Kepion (http://www.kepion.com/) – I was quite impressed with the demos I saw of their products for building SSAS-based BI solutions, especially for (but not restricted to) financial planning; it looked pretty slick.

Predixion (http://www.predixionsoftware.com/predixion/) – again, the company itself isn’t new to me but I got a demo of their new product, Predixion Enterprise Insight Developer Edition, which I’d been meaning to check out for a while. This is an immensely powerful free tool for doing data mining in Excel and it’s very closely integrated with PowerPivot too. Even if you don’t want to do complex stuff, it has some features that would be useful for regular PowerPivot users such as the ability to select a column in a PowerPivot table, analyse the data in it and then generate bandings which are then persisted in a new calculated column.

Need some help?

As well as being a blogger, I'm an independent consultant specialising in Analysis Services, MDX, DAX, Power BI, Power Query and Power Pivot. I work with customers from all round the world solving design problems, performance tuning queries and delivering training courses, and I am happy to work on short-term engagements. For more details see http://www.crossjoin.co.uk