Adatis BI Blogs

Once you have users accessing your cube, it's almost inevitable that at some point someone will ask you to generate usage statistics from it, and there are a number of methods to achieve this. In this quick blog post, I'll detail them and my experiences with each, and then use this data to create a Power BI report.

Native Tabular Properties

The first method is natively through the tabular cube properties. This also has the added bonus (read: impact) that it will optimise future aggregations based on usage – in fact, that's its main purpose. This can be done by setting CreateQueryLogTable to true, setting up the QueryLogConnectionString (to point to the DB where the usage table should be hosted), setting the QueryLogSampling rate (10 means every 10th query will be logged), and finally the name of the QueryLog table.

The advantage of this method is that it's very easy to set up with limited knowledge required, and it could potentially improve performance if you have an environment where users submit repetitive queries. Unfortunately, there are also a number of disadvantages which led me to find other methods. Firstly, it creates a degree of overhead on the cube if it samples too often; we actually had visible performance-related complaints once we turned it on – caused either by the sampling or by the change to the "optimised" aggregations. Depending on the sampling rate, you could also find that users who rarely use the cube are not picked up as part of the stats. As well as this, any changes to the cube structure will cause the logging table to be reset. The table is also limited in terms of what it actually logs (as you can see below) – useful if you just want the user and timestamp info, but not much else, and with no real ability to configure it.

AS Trace

To that extent, I looked for other tools to do the same task but better, and I found AS Trace.
Originally built for SQL Server 2012, it works fine on 2014, and provides the ability to run a trace against the cube activities (and log to a table) exactly like SQL Profiler, but without the overhead of the GUI, which adds unnecessary memory/processor load. It also runs as a Windows service, allowing it to restart automatically when the server reboots; when this happens, the tool logs the existing data to a History table and truncates the logging table. Exactly what I was after.

The tool collects information based on a preconfigured Analysis Services Profiler template, which can be optimised depending on which events you are interested in. I initially ran it with most events selected, and with a limited user set it was generating in the region of 25,000 rows a day. This was clearly not maintainable for a long period of time. I then used a blog post to understand what each event of the profiler was giving me, and created a lightweight trace definition file containing just what I wanted. I limited it to Query Begin, Query End (for DAX/MDX statements) and Audit Login/Logout (for session data).

The setup is very straightforward: just run install.bat as an account with elevated privileges, and check it installs the service correctly. Next, add your SSAS service account to the Log On properties of the service; make sure the account has the "Log on as a service" right, membership of the database you are writing to with DDL and DML access (i.e. able to create tables and write to tables), and lastly admin rights to the instance of SSAS you intend to trace. Next, configure the ASTrace.exe.config file with the parameters you want the tool to use. This includes the location of the cube (it can handle multiple cubes), the location of the trace definition file, the location of the DB instance and table you want to log to, and lastly whether you want to preserve history on restart.
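As an illustration of that configuration step, the appSettings block looks roughly like the sketch below. The key names here are hypothetical placeholders and vary between ASTrace versions, so check the sample ASTrace.exe.config shipped with the tool for the real ones:

```xml
<!-- Hypothetical sketch only: key names vary by ASTrace version; check the
     sample ASTrace.exe.config shipped with the tool for the real ones. -->
<configuration>
  <appSettings>
    <add key="OlapServers"         value="MySsasServer" />  <!-- SSAS instance(s) to trace -->
    <add key="TraceDefinitionFile" value="C:\ASTrace\QueryTrace.tmpl" />
    <add key="SqlServer"           value="MySqlServer" />   <!-- logging DB instance -->
    <add key="TraceTableName"      value="ASTraceTable" />
    <add key="PreserveHistory"     value="true" />          <!-- copy rows to History on restart -->
  </appSettings>
</configuration>
```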
The only thing I couldn't do here is set the schema of the table it logs to, which defaults to dbo. All that's left is to start the service and check the log file to see if it has raised any errors on start-up. If not, the table should be created correctly and awaiting input. I also saw another method while researching, using Extended Events (XEvents), but did not implement it once AS Trace provided me with the information I needed.

View / Power BI Report

I initially used the data to run a limited set of queries to extract total users and total queries for a given time period. This was useful to a degree, but from the data collected I realised I could be doing so much more. This led me to do some analysis across the type of metrics being logged, and allowed me to create a view on top of the tables containing what I thought might be useful on a report. I removed all the redundant columns it was tracking and created some friendly names for the EventSubclass and other columns. I used the PATINDEX function to check the query statement for the existence of some important values – while not an exact science, it gives a good picture of the split between certain user groups and KPIs being run. I ended up limiting the data to EventClass 10 as this seemed to capture all the necessary data. The only downside I have seen so far is that users querying through the Power BI web service are anonymised under the service account name. I'm currently looking into options to resolve this, which I've seen as configuration options on Power BI – to pass through the username as long as it can be matched at the other end. I've included the view definition below.

SELECT
RowNumber AS ID,
SPID AS SessionID,
CurrentTime AS DateQueried,
NTUserName AS Username,
CASE EventSubClass
WHEN 0 THEN 'MDX Query (Excel)'
WHEN 3 THEN 'DAX Query (Power BI)'
WHEN 1 THEN 'METADATA Query'
END AS QueryType,
CASE Success WHEN 1 THEN 'Successful Query' ELSE 'Query Error' END AS SuccessfulQuery,
CONVERT(DECIMAL(10,2),CONVERT(DECIMAL(18,3),CPUTime)/1000) AS CPUTimeSec,
CONVERT(DECIMAL(10,2),CONVERT(DECIMAL(18,3),Duration)/1000) AS DurationSec,
TextData AS Query,
CASE PATINDEX('%Mexico%',TextData) WHEN 0 THEN 0 ELSE 1 END AS MexicoMarket,
CASE PATINDEX('%Colombia%',TextData) WHEN 0 THEN 0 ELSE 1 END AS ColombiaMarket,
CASE PATINDEX('%CS4%',TextData) WHEN 0 THEN 0 ELSE 1 END AS CS4,
ServerName
FROM
[dbo].[ASTraceTable]
WHERE
EventClass = 10
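As an example of how the view can then be used (the view name dbo.CubeUsageView is my own illustration here – substitute whatever name you create it under), a daily summary might look like:

```sql
-- Illustrative only: assumes the SELECT above was created as dbo.CubeUsageView.
SELECT
    CAST(DateQueried AS DATE)  AS QueryDate,
    QueryType,
    COUNT(*)                   AS Queries,
    COUNT(DISTINCT Username)   AS DistinctUsers,
    AVG(DurationSec)           AS AvgDurationSec
FROM dbo.CubeUsageView
GROUP BY CAST(DateQueried AS DATE), QueryType
ORDER BY QueryDate, QueryType;
```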
Once I had the view, creating the report was relatively straightforward, and can be seen below.
I included metrics for number of queries by user (blurred out) which also doubled as a filter, the % split of queries for things such as Excel/Power BI, a measure of queries by timeframe, a logarithmic scaled display for queries by query duration, and lastly a split of queries by KPI. I intend to tweak these once I receive more data from the trace, but was relatively happy with the information that they were providing.
Please let me know if you have any comments.

I've recently been working on a project which required KPI-level security alongside the traditional row-level security secured at a geography level. This would limit what financial data a user could see within a cube, without having to create multiple cubes or use perspectives (which would not actually secure the data). To achieve this, I needed to populate a set of 'KPI User/Role' tables stored in Master Data Services (MDS) with a list of users who were stored in a particular AD group. I would need these tables updated on a regular basis to grant/revoke access. We could then use these names along with the USERNAME() function in DAX to filter.

The Solution

One method to solve my problem is to use SSIS. The package can be set up to run as part of a SQL Agent job, either on a schedule or on demand. My list of users was stored in an AD group called LH_FIN. To start with, you will need to truncate and clear the MDS staging tables that you are about to populate. You can then use the data flow to process the majority of the logic by creating a script component task. The purpose of this is to loop through Active Directory and pick up the details of the users that belong to the specified AD group, or set of AD groups if dealing with multiple roles. A number of variables are defined which the task uses to complete the lookup:

strLDAP – the LDAP directory on which to perform the lookup
strDomain – the domain on which the AD group(s) belong
strADPrefix – the AD group prefix from which to return user information
strADParent – the parent group which contains the AD groups being looked up (may not be needed if only looking up a single AD group)

To extract users from multiple groups, make sure the prefix stored in the variable strADPrefix covers both groups. Once the rows are extracted, it is then a case of using SSIS to split the data accordingly on the AD group name.
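The initial "truncate and clear" step is a single Execute SQL Task. A sketch is below – the entity name (KPIUser) is hypothetical, as MDS generates one stg.&lt;EntityName&gt;_Leaf staging table per entity:

```sql
-- Hypothetical entity name: MDS creates a stg.<EntityName>_Leaf staging
-- table per entity; clear out any rows left over from a previous run.
TRUNCATE TABLE stg.KPIUser_Leaf;
```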
The following code can be used in the script. The first section sets up the objects required to interrogate the directory, and the fields we expect to return from the accounts – the most important of which is memberof, which is used to check against our AD prefix. It also filters out items such as service accounts and disabled accounts to speed up the interrogation process.

Public Overrides Sub CreateNewOutputRows()
Dim domain As String = Variables.strDomain
Dim searchRoot As New DirectoryEntry(Variables.strLDAP, Nothing, Nothing, AuthenticationTypes.Secure)
Dim dirSearch As New DirectorySearcher(searchRoot)
dirSearch.SearchScope = SearchScope.Subtree
'LogonName, GroupsUserBelongsTo, Department, JobTitle, MailAddress, DisplayName
dirSearch.PropertiesToLoad.Add("samaccountname")
dirSearch.PropertiesToLoad.Add("memberof")
dirSearch.PropertiesToLoad.Add("department")
dirSearch.PropertiesToLoad.Add("title")
dirSearch.PropertiesToLoad.Add("mail")
dirSearch.PropertiesToLoad.Add("displayname")
'Combine the clauses with an LDAP AND (&...) filter - assigning
'dirSearch.Filter repeatedly would overwrite each previous value
'rather than combine them.
'(objectCategory=person)(objectClass=user)   - user objects only
'(!userAccountControl:...:=2)                - exclude disabled accounts
'(!userAccountControl:...:=65536)            - exclude password-never-expires accounts, i.e. service accounts
dirSearch.Filter = "(&(objectCategory=person)(objectClass=user)" &
    "(!userAccountControl:1.2.840.113556.1.4.803:=2)" &
    "(!userAccountControl:1.2.840.113556.1.4.803:=65536))"
'sets chunk size for retrieving items
dirSearch.PageSize = 1000
The next section of code performs the search and, for any LDAP objects it finds within the filter set, returns the properties requested. These properties are then stored in key/value pairs.
Dim props As ResultPropertyCollection
Dim values As ResultPropertyValueCollection
Dim key As String
Dim userAccountName As String
Dim departmentHome As String
Dim jobtitle As String
Dim GroupName As String
Dim email As String
Dim displayName As String
Dim groups As New ArrayList
Using searchRoot
'Return all LDAP objects, LDAP://acl/CN=Tristan Robinson,OU=Employees,DC=ACL,DC=local
'CN = Common Name, OU = Organisational Unit, DC = Domain Component
Using results As SearchResultCollection = dirSearch.FindAll()
For Each result As SearchResult In results
'For each object return properties, i.e. displayname, memberof, etc
props = result.Properties
For Each entry As DictionaryEntry In props
key = CType(entry.Key, String)
'For each property, inspect the property and record its value
'Logon Name
If key = "samaccountname" Then
values = CType(entry.Value, ResultPropertyValueCollection)
userAccountName = CType(values.Item(0), String)
End If
'Department
If key = "department" Then
values = CType(entry.Value, ResultPropertyValueCollection)
departmentHome = CType(values.Item(0), String)
End If
'Job Title
If key = "title" Then
values = CType(entry.Value, ResultPropertyValueCollection)
jobtitle = CType(values.Item(0), String)
End If
'E-Mail
If key = "mail" Then
values = CType(entry.Value, ResultPropertyValueCollection)
email = CType(values.Item(0), String)
End If
'Display Name
If key = "displayname" Then
values = CType(entry.Value, ResultPropertyValueCollection)
displayName = CType(values.Item(0), String)
End If
'Groups User Belongs To (array/collection)
If key = "memberof" Then
values = CType(entry.Value, ResultPropertyValueCollection)
groups = GetGroups(values)
End If
Next
The final section writes the user's details to the output buffer if any group in the array list we extracted above matches our original AD prefix variable. It then resets the variables and loops round to the next account.
'Export user details to buffer if it passes the logical test
For Each item As String In groups
'Avoids computer accounts, i.e. ending with $
If userAccountName.EndsWith("$") = False And
   item.ToString.StartsWith(Variables.strADPrefix) Then
   'And item.ToString <> (Variables.strADParent)
Output0Buffer.AddRow()
If String.IsNullOrEmpty(userAccountName) Then
Output0Buffer.UserAccountName_IsNull = True
Else
Output0Buffer.UserAccountName = userAccountName
End If
If String.IsNullOrEmpty(domain) Then
Output0Buffer.Domain_IsNull = True
Else
Output0Buffer.Domain = domain
End If
If String.IsNullOrEmpty(item.ToString) Then
Output0Buffer.GroupName_IsNull = True
Else
Output0Buffer.GroupName = item.ToString
End If
If String.IsNullOrEmpty(jobtitle) Then
Output0Buffer.JobTitle_IsNull = True
Else
Output0Buffer.JobTitle = jobtitle
End If
If String.IsNullOrEmpty(email) Then
Output0Buffer.Email_IsNull = True
Else
Output0Buffer.Email = email
End If
If String.IsNullOrEmpty(displayName) Then
Output0Buffer.DisplayName_IsNull = True
Else
Output0Buffer.DisplayName = displayName
End If
End If
Next
groups.Clear()
userAccountName = ""
departmentHome = ""
jobtitle = ""
GroupName = ""
email = ""
displayName = ""
Next
End Using
End Using
End Sub
I also required a function to split the list of groups a user belonged to and store them in another array list.
Private Function GetGroups(ByVal values As ResultPropertyValueCollection) As ArrayList
Dim valueList As ArrayList = New ArrayList()
For Each Item As Object In values
Dim memberof As String = Item.ToString()
Dim pairs As String() = memberof.Split(",".ToCharArray)
Dim group As String() = pairs(0).Split("=".ToCharArray)
valueList.Add(group(1))
Next
Return valueList
End Function
End Class
Once a list of users has been extracted, you will need to do a lookup against the existing list and only stage those that are new. This can be achieved through a simple lookup component. You can then move the rows into the MDS staging table ready for the load into MDS.
After the data flow has processed successfully, the next stage is to sweep the records into MDS using one of the built in stored procedures in the product. The results of which can be seen in the MDS Import View on the MDS site.
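That sweep step can be sketched as follows. MDS generates one stg.udp_&lt;EntityName&gt;_Leaf procedure per entity, so the entity name (KPIUser) and batch tag below are hypothetical:

```sql
-- Sweep staged rows into MDS. The entity name is hypothetical; MDS
-- generates one stg.udp_<EntityName>_Leaf procedure per entity.
EXEC stg.udp_KPIUser_Leaf
    @VersionName = N'VERSION_1',
    @LogFlag     = 1,              -- write results to the import log
    @BatchTag    = N'ADGroupLoad'; -- must match the BatchTag set when staging
```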
The following entities can then be used to power the DAX:
User (Username, Team, AD Login – populated from the script above)
Role (Role – populated manually)
User Role (User, Role – joins users to roles, populated manually)
KPI (KPI – populated manually)
KPI Role (Role, KPI – joins roles to KPIs, populated manually)
These are then processed to the DW by ETL, and a UserKPI view is written across the tables to provide an AD Login to KPI pair.
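A minimal sketch of such a UserKPI view is below. The table and column names are assumptions for illustration – they will depend on how the entities land in your warehouse – but the output matches the UserKPI[ADLogin] and UserKPI[KPIName] columns used by the DAX that follows:

```sql
-- Illustrative only: table/column names assumed from the entity list above.
CREATE VIEW dbo.UserKPI
AS
SELECT u.ADLogin,
       k.KPIName
FROM dbo.[User]    u
JOIN dbo.UserRole ur ON ur.UserId = u.UserId
JOIN dbo.[Role]    r ON r.RoleId  = ur.RoleId
JOIN dbo.KPIRole  kr ON kr.RoleId = r.RoleId
JOIN dbo.KPI       k ON k.KPIId   = kr.KPIId;
```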
For filtering the measures in the cube, you can then apply the following piece of DAX to the original measure – Gross Sales in the example below:
Gross Sales:=
IF(
CONTAINS(
CALCULATETABLE( 'UserKPI', UserKPI[KPIName] = "Gross Sales"),
UserKPI[ADLogin], USERNAME()
),
[Gross Sales (ACTUAL)],
BLANK()
)
This concludes this blog post – hopefully it will be useful for anyone who wants to extract users from AD to control access/security within an application. The data could also be loaded directly into database tables rather than MDS if required.
Please feel free to comment!

During the keynote at the UK Power BI Summit, Microsoft announced some new features coming to Power BI. Chris Webb also provided a session on Azure Analysis Services, hinting at what may be next and, more importantly, how it may impact Power BI. Without further ado, I will dive straight in.

Power BI Visuals

Some exciting visuals were demoed by Will Thompson (Program Manager for Power BI). They were shown in an exclusive preview of Power BI, not available to the general public:

- Themes
  - Finally! Companies can now easily 'skin' a suite of reports to use standardised logos, fonts, etc.
  - The themes will be driven by a JSON file.
  - All themes can be uploaded into PowerBI.com, using the 'Themes Gallery'.
- Slicer Improvements
  - A numeric, sliding-scale slicer will be available as a new configuration option.
  - Currently only designed for attributes.
  - Measure slicers are not an immediate focus, but Microsoft are giving them thought.
- Matrix/Table Visuals
  - Whilst these currently exist, a new visual will be added. This includes:
    - Drill-down capabilities.
    - Include/Exclude – like what you see in charts currently.
    - The ability to highlight rows and columns, which also cross-filters and highlights the selections on accompanying charts.

Analysis Services

All of the features discussed are for Azure Analysis Services, but I am sure they will also be made available on-prem soon after the GA release:

- Live Connect to Analysis Services
  - Add ad hoc measures within the Power BI model.
  - It does not update your actual Analysis Services model; therefore, measures added in Power BI need to be added with caution.
  - Currently Tabular only, but this should eventually come to Multidimensional too.
  - Available in both Azure (cloud) and on-prem.
- Power Query/M Integration
  - This is the missing piece of the Power BI/Analysis Services jigsaw.
- Table-Level Security
  - The ability to implement this and row-level security together.
- Migrating existing Power BI Models
  - This will be awesome! Imagine being able to move your complex Power BI model (which is now performing badly) into an Azure Analysis Services model.
  - At the moment, you have to create the tabular model again.

Conclusion

Whilst no official dates have been given, I hope that March's release will include the visual features, such as themes. The ones specific to Azure Analysis Services are ideas that have come from many blogs (professional experts) and the Microsoft roadmap. If there is a feature that your customer needs, or just something you believe should be in Power BI, the Ideas forum is a great place to submit your suggestion.

Further Reading

For up-to-date Microsoft announcements, I would definitely advise following the below blogs and Twitter users:

- Blogs
  - Power BI Ideas - https://ideas.powerbi.com/forums/265200-power-bi-ideas
  - Power BI Blog - https://powerbi.microsoft.com/en-us/blog/
  - Power BI Blog Announcements - http://bit.ly/2kAv4Id
  - Chris Webb's blog - https://blog.crossjoin.co.uk/
- Twitter
  - Will Thompson (Power BI Program Manager) - @Will_MI77
  - Riccardo Muti (SSRS Program Manager) - @RiccardoMuti

Contact Me

If you have any questions, please comment below or contact me directly on Twitter (@CallumGAdatis).

What is the issue?

While recently working on a SQL Server 2012 Tabular cube in Visual Studio, I came across an extremely frustrating issue where the visual layout of my measures was re-ordered. You are most probably aware that the actual structure or ordering of the measures within the workspace of a Tabular table in Visual Studio holds no relevance to how Visual Studio sees or uses the measures. However, if you are like me, you will naturally organise your measures into some kind of pattern, making them easier to use, maintain or locate. In this instance, I had all of my visible 'switching' measures listed in the first column and then their hidden, dependent measures – different date analysis calculations – listed in adjacent columns. For example, all of my 'Year to Date' calculations were in the second column, my 'Month to Date' calculations were in the third column, etc. Eventually I had over 20 visible switching measures in the first column, each with 8 separate hidden date analysis calculations, giving me a total of 160 measures which were all neatly organised in the table's workspace. An example of what this may look like is shown in the following diagram.

At a later point I opened the table to add a new measure and, to my surprise, all of the existing measures had moved position and been restructured, as shown in the following diagram.

We were using TFS as our source control, and by going back through the version history of the SSAS Tabular project I found that a change to a completely different table, which was checked in weeks earlier, had the changes to this table's layout checked in with it. Unfortunately, as there had been several other changes and check-ins since, rolling back wasn't an option. Here I had two options: painstakingly cut and paste each measure back into its original location, or leave the table in its new format and move on. Although I have a slightly OCD side to me when it comes to things like this, for the sake of project time I left it how it was; as I mentioned earlier, the structure of the measures on a table holds no real relevance in SQL Server 2012 or 2014 – it purely helps with the management and maintenance of groups of measures. A colleague has since run into this issue too, and their table was restructured differently; Visual Studio seems to either add an additional column or take an existing one away and restructure the measures accordingly.

Potential solutions

I am unaware of the exact reason for this happening and therefore can't give a suitable way to structure your tables to prevent it. There is only one way I found to correct this issue once it has occurred, other than manually cutting and pasting your measures back to the way you want. Luckily, one member of our team had not worked on the project since the initial check-in of the project which contained the restructured table, and had not got the latest code for the project. I copied their Model.bim.layout file, which is located in the project folder in File Explorer, and replaced my version with it. After restarting Visual Studio and re-opening my project, I found that my table structure was back to its original layout.

The issue here is that we have multiple developers working on the same Tabular project but on different child tables; therefore, simply going through every table to check it hasn't been restructured before every check-in would be inefficient, and a layout change could easily be missed. The solution that our development team adopted was to separately attach the Model.bim.layout file to the solution in a Solution Items folder and then have this stored in source control, as shown in the diagram below. This file should then be used to replace the existing Model.bim.layout file in the developer's local Tabular project whenever they start working on the Tabular project, and should replace the file in Solution Items after they have finished. Checking this file in at the same time as the changes to the Tabular project allows the file to have an independent version history, and the separation of this file from the Tabular project prompts the developer to manually check the layout of the tables before checking in the Model.bim.layout file, hopefully adding an additional check to prevent this issue from happening.

Although this extra step, needed whenever changes to the Tabular project are made, will add time to the check-in process, it is in my opinion worth it to ensure the structure of your tables in a Tabular project is kept consistent and easily maintainable, especially when they start to get large and complex. I have only witnessed or heard of this happening in Visual Studio 2013 with Tabular projects in both SQL Server 2012 and SQL Server 2014. I am hoping that with the many improvements and fixes that Microsoft has released for SQL Server 2016 Tabular projects this will no longer happen, but if you have witnessed it in SQL Server 2016 then please comment below and let me know. Also, if you know of a better way to prevent this issue from occurring, or have more information on why it happens, please comment below.

After attending the PASS Summit 2016 a couple of weeks ago, I attended a number of sessions that provided an insight into the direction Microsoft are heading with BI. I thought I'd share this with the community.

Looking back to October 2015, the official Reporting Roadmap blog from Microsoft stated their intent to dramatically improve the visualisations, integration and harmonisation of both on-prem (SSRS) and cloud-based services (Power BI). Whilst reporting appeared to be a renewed focus, they are constantly driving other areas of BI – such as Analysis Services and database/data warehousing development in the cloud.

Now, for the interesting bit. Here is what we can expect in the SQL Server 2016 BI stack in the near future:

- SQL Server 2018 CTP (to be released within the next month).
  - This seems very early, considering 2016 has only just been released! Let's wait for an official Microsoft announcement.
- Azure DW
  - Auto Create Statistics
    - Currently, statistics have to be generated and updated on the fly.
  - Replicated Distributed Table
    - One that will excite any cloud-based SQL Server developer.
    - This will reduce data transfer between distributions/nodes and consequently improve performance.
- Azure Data Lake
  - General Availability (GA) is imminent.
  - Future features:
    - PolyBase, so that you can connect the two big MPP platforms (ADL and DW).
    - SSIS connectors (released with GA) for the Store.
    - Python and R in U-SQL.
- SSIS
  - Lots of new connectors, including:
    - ADLS.
    - CRM.
- SSAS
  - Migrating Power BI models into Tabular.
    - This is coming very soon apparently, but will be developed in Azure SSAS first.
  - Object-level security in Tabular.
    - We currently have row-level security, but there is talk of securing a physical object, not just a row.
    - Even better news – Microsoft want to integrate the two together, which will make security awesome in Tabular.
- SSRS
  - Supporting (not pinning) Excel reports in an RS report.
    - This will come, but Power BI is the focus right now and we may have to wait a while.
- Power BI
  - Additional and better pivot-table functionality.
  - Integrating Active Directory dynamically.
  - The potential to use DirectQuery and Import modes together – as a hybrid.
    - The functionality is possible, but the performance implications need to be weighed up by Microsoft before anything will emerge.
  - Quick Calcs.
    - Only 'Percent of Current Total' is currently available.
    - The potential is to offer lots more – such as YTD, MAT, Current Year vs. Previous Year, etc.
    - This is for the users who aren't familiar with DAX.
  - Template organisational content packs.
    - The ability to let the user personalise colours, fonts, etc. within a structured (organisational) content pack.
- Power BI Embedded
  - Application developers will be able to limit user sessions and therefore reduce the per-hour charges that come with them.

There are some features/issues Microsoft do not plan to change. Although, the good thing about Microsoft is that they are community driven, so if you feel strongly about anything (and get support from your peers), they may change their minds.

- SSRS
  - Q&A, Query Editor (Power Query), R integration, etc. are not being developed.
    - Pretty obvious really. Whilst they are going to introduce almost everything from Power BI, some elements of functionality are just not needed for on-prem purposes.
    - R scripts may come one day, but are not a focus right now.
- Power BI
  - Source control
    - No immediate plans to integrate with TFS or modularise the pbix files (for a more developer-based solution).
    - Not surprising, as this is a self-service tool, not a development team tool.
    - A workaround is to upload pbix files into OneDrive and use the versioning as an element of source control, or add a file into Visual Studio.
    - Keep voting on PowerBI.com if you want this! (Currently 278 votes.)
      - https://ideas.powerbi.com/forums/265200-power-bi-ideas/suggestions/9677517-source-control
- Power BI Embedded
  - Licence model
    - 'Speak to Marketing' is what we were told.

Is everyone excited? I certainly am. Microsoft are openly admitting that any new BI-related feature will be developed in the cloud first, but this was pretty obvious. For all you on-prem fanatics (me included), at least the bugs/undocumented features will be ironed out before we get to use them! My personal favourite has to be the direction SSRS is going. It was recently labelled a 'mature' product, but now RS has suddenly become a cool and current tool to use. The fact that Power BI and SSRS will become almost the same product tells us that on-prem is still an important part of the Microsoft licensing strategy. I am sure there are lots of other features coming into the BI stack over the next 6 months to a year, and these were just the ones I had heard about. Feel free to comment if you have additional ones to share.

This blog solely focuses on the new Analysis Services features of SQL Server 2016 CTP2. For anyone who may missed it – click here to view my opening blog on the Database Engine. Although there have not been any major Analysis Services (SSAS) enhancements in CTP2, it is pleasing Microsoft are still looking to improve this part of the BI Stack. The majority of them seem to be geared towards Tabular and DAX, although there are plans to release further Multidimensional functionality in CTP2 and beyond. There are five key enhancements for SSAS: 1. Tabular Model Partitions (Tabular). 2. Many 2 Many (Tabular). 3. Analysis Services PowerPivot mode. 4. New DAX Functions (Tabular). 5. Enable/disable attribute hierarchies. Tabular Model Partitions SQL Server SSAS 2016 CTP2 includes new parallel processing functionality for tables with two or more partitions, increasing processing performance. There are no configuration settings for this. More information on this feature is displayed below: 1. IN SSAS Tabular, partitioning big data marts will help drastically when processing new data. Instead of having to process all data, you can just partition the relevant ones. 2. In order to create, manage, and process partitions in SQL Server Management Studio, you must have the appropriate Analysis Services permissions defined in a security role. 3. Parallel processing is also available in SSAS. This happens automatically, when processing a table with more than one partitions. Although you can choose to independently process a partition if required. Many to Many Dimensions (Tabular) There isn’t too much to show or say here yet apart from the fact that this feature is FINALLY HERE! Gone are the complications of workarounds (thanks to Marco Russo and Alberto Ferrari), we can now just use the Many to Many functionality in Tabular. Multidimensional already has this feature, which is one of the reasons why developers have yet to buy in to Tabular. 
Unfortunately, Microsoft have yet to reveal much detail on to how this feature will work. I just hope it performs as well as the current workarounds. Analysis Services PowerPivot mode To install PowerPivot for SharePoint is now a lot easier. If you already know how to configure SharePoint 2013, this will help you greatly. Taken from the MSDN Microsoft website, you simply use the Install Wizard and do the following: 1. Select ‘Analysis Services’ from the standard Instance Feature. 2. Choose ‘Analysis Services server mode’ and configure administrators (screenshot below). New DAX Functions There are many new DAX functions, which are all detailed on MSDN website – click here for further detail. I have picked out 5 functions that will be highly beneficial when creating an SSAS Tabular cube. I am looking at this form a very BI focused background so I would recommend looking at all of the new functions to see if they benefit your needs. 1. TOPn a. This is not actually a new function but has been updated. b. Now allows TRUE/FALSE/ASC/DESC to specify sorting direction. 2. CALENDAR a. Returns a table with a single column named “Date” that contains a common set of dates. The range of dates is from the specified start date to the specified end date. b. See Similar – CALENDARAUTO 3. MEDIAN a. Very simple – returns the median numbers in a column. b. See Similar – MEDIANX (uses an expression). 4. NATURALINNERJOIN a. Inner join of a table with another table. The tables are joined on common columns in the two tables. If the two tables have no common column names, an error is returned. b. See similar – NATURALLEFTOUTERJOIN 5. SUMMARIZECOLUMNS a. Enables you to group data together and return a summary table. Enable/Disable Attribute Hierarchies Again, this functionality is tailored towards SSAS Tabular. There is a new setting that ties back to the Tabular mode metadata 1. ColumnUsage property a. DAXUsage i. Attribute hierarchies can only be used in measures. b. UnrestrictedUsage i. 
Can be set in the XMLA (in SSDT) or by using an ALTER statement against the Tabular model. ii. An example taken from MSDN:

<Alter>
  <ObjectDefinition...>
    <Database>
      <Dimensions>
        <Dimension>
          <Attributes>
            <Attribute>
              <ddl500:ColumnUsage value="ddl500_500">
                DAXUsage | UnrestrictedUsage
              </ddl500:ColumnUsage>
            </Attribute>
          </Attributes>
        </Dimension>
      </Dimensions>
    </Database>
  </ObjectDefinition>
</Alter>

Multidimensional Features The full SQL Server 2016 release will include some key enhancements to Multidimensional SSAS. These include: 1. Netezza as a Data Source (Netezza Data Warehouse | IBM - NDM Technologies). 2. General performance improvements. 3. Unnatural hierarchies. 4. Distinct counts. 5. DBCC support, which checks the logical and physical integrity of objects in the specified database. 6. Expose on-premises Multidimensional cubes in the cloud with Power BI. I will provide further information on these improvements as and when Microsoft announce them. References For more information on all of the new SSAS SQL Server 2016 features, the below resources/blogs are highly recommended. · Official Microsoft Page – https://msdn.microsoft.com/en-us/library/bb522628(v=sql.130).aspx · Brew your own Business Intelligence Blog – http://byobi.com/blog/2015/05/ssas-related-enhancements-in-sql-server-2016/ · Jorg Klein SQL Server Blog – http://sqlblog.com/blogs/jorg_klein/archive/2015/05/22/bi-on-your-terms-with-sql-server-2016.aspx

I recently had a request from a client who wanted to use a parent-child hierarchy to navigate their report. The issue with this, however, is that when you click the hierarchy the report refreshes and the data is updated in the charts, but the hierarchy collapses to its original state as below. This wasn't suitable for the client's needs as it was a large hierarchy with several levels of depth. What was required was to maintain the state of the hierarchy so that it would stay expanded during report refresh. After a little thought I came up with the following solution: 1. Set up a standard report with an indented hierarchy with expandable sections. This has been covered in many places so I'll only give a brief overview. For this example report I have used the Adventure Works multidimensional model, with the following MDX query as the source for the hierarchy data. Add the fields in a matrix like this: Right-click the Organizations text box, select 'Textbox Properties' and in the Alignment tab set the Left padding property to the following value (change the number to adjust the indentation): Following this you need to set up groupings for the expanding hierarchy. Add a grouping on the UniqueName property. On the visibility section set the toggle property to Hide (ignore the expression for now) and set the "Display can be toggled by this report item" property as per the below. In the advanced section set the recursive parent property to [Organisations.ParentUniqueName], and this should give a working report with an expandable hierarchy. 2. Add a hidden parameter to the report called OrganisationLevel. Set the default value to the top of the hierarchy. Add another column to the matrix and add text to it for the link. Select the text, right-click it, navigate to properties and place an action to go to the original report, passing the OrganisationLevel parameter with a value of [Organisations.UniqueName]. 3. 
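The MDX for the hierarchy dataset was an image in the original post. As a minimal sketch against the Adventure Works Organization parent-child hierarchy (your dimension and hierarchy names may differ), a query returning each member together with its parent's unique name for the recursive grouping could look like this:

```mdx
WITH
MEMBER [Measures].[UniqueName] AS
    [Organization].[Organizations].CurrentMember.UniqueName
MEMBER [Measures].[ParentUniqueName] AS
    [Organization].[Organizations].CurrentMember.Parent.UniqueName
SELECT
    { [Measures].[UniqueName], [Measures].[ParentUniqueName] } ON COLUMNS,
    [Organization].[Organizations].AllMembers ON ROWS
FROM [Adventure Works]
```

The UniqueName and ParentUniqueName columns then feed the grouping and the recursive parent property described above.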
Add a dataset dsHierarchyVisibleMembers with the below MDX query – this query uses the OrganisationLevel parameter to work out which level in the hierarchy the user has clicked on, and the entities at the same level, which we can then use to control the state of the expanding hierarchy. Now set the visibility property on the row grouping to the following value: The last step is to set the hierarchy collapsed/expanded state, which we do by setting the 'InitialToggleState' property of the text box to the following formula. Once these steps are completed, clicking one of the elements in the report will pass the parameter, but the hierarchy state is maintained between clicks. Clicking View on 'Northeast Division' in the left report displays the following on the right – note that the chart has been updated, but the hierarchy expansion state remains as it was before the user clicked. That's all for now, hope this is of use.
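The visibility and InitialToggleState expressions were shown as screenshots in the original post. As a rough sketch only – assuming dsHierarchyVisibleMembers returns a UniqueName field for every member that should currently be visible – they could take a form along these lines:

```vb
' Hidden property of the row group: hide any member not returned by dsHierarchyVisibleMembers
=IsNothing(Lookup(Fields!UniqueName.Value, Fields!UniqueName.Value, Fields!UniqueName.Value, "dsHierarchyVisibleMembers"))

' InitialToggleState of the toggle text box: start expanded when the member is in the visible set
=Not IsNothing(Lookup(Fields!UniqueName.Value, Fields!UniqueName.Value, Fields!UniqueName.Value, "dsHierarchyVisibleMembers"))
```

The exact expressions depend on what the dsHierarchyVisibleMembers query returns, so treat these as a starting point rather than the original author's formulas.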

I was recently approached by a client whom we had assisted in implementing a Microsoft Analysis Services Tabular solution and a supporting management pack of SSRS reports. They were asking about utilising some of the KPIs defined in their Tabular cube in the SSRS reports using DAX queries. I was at a bit of a loss, and most web help seemed to suggest it couldn't be done:
Blog comment from 2012
However, with a bit of time to investigate, it appears Power View does now support Tabular KPIs, and as such a DAX query should be able to pull them out.
My KPI base measure is named "Highest Ranking" and I am able to view its value, status or goal in Power View. A little work with every SQL developer's best friend, Profiler, and I had what I was looking for. It appears that when you create a KPI, SSAS defines some measures in the background which do some of the work of the MDX KPI functions.
Therefore the following DAX query against my cycling demo cube returns the expected results and could therefore be used in SSRS reporting.

EVALUATE
ADDCOLUMNS (
    VALUES ( 'DimRider'[Name] ),
    "Highest_Ranking", 'FactRanking'[Highest Ranking],
    "Highest_Ranking_Goal", 'FactRanking'[_Highest Ranking Goal],
    "Highest_Ranking_Status", 'FactRanking'[_Highest Ranking Status]
)
ORDER BY 'DimRider'[Name]
I also tested for the existence of other KPI functions such as Trend and Weight, but these do not appear to be present. It is also interesting that defining a KPI over a measure does not change the measure name but just groups it in the field list, and as such there is no need for a value function.
For more info on KPIs in Power View there is additional documentation here. I am currently unsure of the versions required for this functionality, so if it is missing from your environment I would love to hear from you.

Problem SSAS translations are known to handle warehousing language requirements well. We recently had a requirement for a multi-language report suite on top of a user-browsed cube. The first thought was SSAS translations: these will pick up the user's local language settings and allow an elegant solution to the problem. However, a requirement was put forward for a language picker in the SSRS report suite to override the user's locale if they so desire. This was not so obvious. Data source We started with a list of languages to build a picker from. This was built by importing the table available here to a SQL Server and building it as a dataset in our report: http://msdn.microsoft.com/en-us/library/0h88fahh.aspx It is important to bring through the decimal value in column 4 and the short string in column 2, as we will see later. Parameter Build Use the dataset from MSDN to build a language-select parameter. Use the language name in the label field and the decimal locale identifier in the value field. The setup should look something like this: Default value setup If you just need English as the default, use the UK code 2057 as your default value; however, a more elegant final solution is a little more complex. Reporting Services provides the built-in variable User!Language, which holds the user's language in the short-string 'en-gb' style format. So although we can access the user's language, it needs translating into a locale identifier before we can pass it to SSAS. Fortunately for us, the same table we used above can be used to translate between the short string and the locale. I achieved this using a further hidden parameter defaulted to the built-in User!Language variable; the parameter was set up as detailed below, using the dataset where LanguageCode = short string in our source table. If this parameter is run first (it must be earlier in the report parameters list) it will contain the user's language in the form 'en-gb' in the value field and '2057' in the label field. 
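As a sketch of the hidden parameter's dataset query – assuming the MSDN locale table was imported into a table named dbo.LocaleCodes with columns LanguageCode (the column 2 short string) and LocaleIdDec (the column 4 decimal value); both names are hypothetical:

```sql
-- Translate the short-string language code (e.g. 'en-gb') into its decimal locale identifier
SELECT LocaleIdDec, LanguageCode
FROM   dbo.LocaleCodes
WHERE  LanguageCode = @UserLanguage
```

The @UserLanguage query parameter is mapped to the built-in User!Language variable, so the dataset returns the one row for the current user's locale.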
It is then simple to set the default of the first drop-down parameter to the value of the hidden parameter's label using the following syntax: =Parameters!UserLanguage.Label I am open to suggestions of a way to get the default without a hidden parameter or a large switch statement, so if you find something more elegant please let me know! It is also worth noting that if a user has a language that your source table does not have an entry for, you are obviously going to lose the defaulting behaviour. Parameter Ordering It is essential to make sure that the parameters are ordered in your parameter list in the sequence they need to be evaluated in. In this case the hidden user language parameter needs to be first, followed by the visible language drop-down parameter. Only then should other report parameters be added. Passing the Parameter value to SSAS In order for SSAS to provide us the data in the requested language, we need to pass the locale to Analysis Services. We can do this by overriding the locale in the connection string. Go to your report data source and double-click for properties. Then we need to edit the connection string expression as circled. Here we need to make sure our connection to SSAS is provided with the locale from the parameter. We use the following expression to build our connection string: ="Data Source=localhost;Initial Catalog=" & CHR(34) & "Adventure Works DW 2008R2" & CHR(34) & ";Locale Identifier=" & Parameters!Language.Value (CHR(34) is the " character, escaped properly.) This builds the string detailed below for the parameter selection Spanish: Data Source=localhost;Initial Catalog="Adventure Works DW 2008R2";Locale Identifier=3082 This should now preview and allow you to select the language, and your datasets will return any translations you have present in the cube. Notes If you are using a shared data source reference, this is not going to work for you, as it is only possible to use an expression on a data source held in the report. 
In this case perhaps you can use a solution along the lines of the one detailed by Mosha here: http://sqlblog.com/blogs/mosha/archive/2006/10/07/member-caption-translations-in-mdx.aspx Another warning: finish your report design first, because you can't refresh datasets and metadata once the expression is set. Tested on 2008R2 only. Hope this helps everyone and allows you to select your existing translations in a flexible way through SSRS. Your comments are welcome.