We expect three parameters: the path to the workbook, the name of the “tab” (worksheet), and the name of the table into which we plan to load the data.

We’re using OPENROWSET to connect to the Excel workbook.

We’re using the Microsoft ACE OLEDB 12.0 driver (as opposed to the older Microsoft JET driver). If the right version (12.0 when this code was written) isn’t installed alongside SQL Server, you’ll need to find and install this driver. A simple web search was all it took for us to find it.

We’re specifying the parameters for loading the worksheet. Here are the important ones:

Excel 12.0: This is the expected Excel workbook version.

HDR=YES: We expect a header row (which will provide column names for the target table).

IMEX=1: We can’t guarantee that all values in a column are of the same data type, so we tell SQL Server to expect the unexpected and not flip out.

DATABASE=(path): The workbook path.

If the load fails for any reason, we roll back the transaction and log the error message.
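The loader procedure itself isn’t reproduced in this excerpt, but the heart of it is the OPENROWSET call described above. Here’s a minimal sketch; the file path, sheet name ([Sheet1$]), and target table are hypothetical placeholders:

```sql
-- Minimal sketch of the OPENROWSET call described above.
-- Path, sheet name, and target table are placeholders, not the original values.
INSERT INTO dbo.TargetTable
SELECT *
FROM OPENROWSET(
    'Microsoft.ACE.OLEDB.12.0'
    ,'Excel 12.0;HDR=YES;IMEX=1;Database=C:\Temp\Workbook.xlsx'
    ,'SELECT * FROM [Sheet1$]'
);
```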

Now we’re getting somewhere. In my next post, we’ll use these building blocks to assemble our automated data loader.

Recently we discussed the ability to access the command line via T-SQL with xp_cmdshell. Today I’d like to take that a step further and introduce some procedures that perform common file-system tasks. This will be the first part in a short series that culminates in the ability to automate loading Excel data.

This may seem like an exercise in futility. After all, you can load Excel data through the Import Wizard in SQL Server Management Studio, and you can do the same with Integration Services. However, I’ve been in a couple of situations where a client rejected the use of SSIS, and an automated load was still a project requirement. This solution was the result.

Before jumping into the code, let’s talk about the process. There was no custom app for data entry, so Excel was the preferred approach – primarily because non-technical team members would still be able to perform the work. Ideally, they would create new (or modify existing) workbooks, and just drop them in a secure shared folder on the network. SQL Server would scan the shared folder every five minutes and load everything it found there. Since SQL Server could only load from a local TEMP folder for which it had READ permissions, this would mean copying the workbooks across the network.

This proved to be inefficient for a couple of reasons:

Did we really want to copy and load every Excel file in the folder every five minutes? Unlikely. It’s possible that some of the workbooks weren’t ready for prime-time. Also, it’s possible that not every worksheet in a workbook was meant to be loaded. Thus, a requirement surfaced that we enable selective data loads.

We decided to do this by creating a workbook called Manifest.xlsx. It originally contained a single worksheet with two columns – a workbook name and a worksheet name. SQL Server would only load worksheets identified in this manifest.

Did we want to reload workbooks that hadn’t changed since the last load? Nope. No point. So, it became a requirement that SQL Server track the “modified” timestamp of each file in the manifest.

First, let’s assume we’ve got a list of workbooks from the manifest. We need to make sure they exist. Even the most recent version of Windows finds it difficult to copy and load nonexistent files.

CREATE PROCEDURE ext.usp_FileExists (
    @Path VARCHAR(255)
    ,@Exists BIT OUTPUT
)
AS
SET NOCOUNT ON;
SET ANSI_WARNINGS OFF;

SET @Exists = 0;

CREATE TABLE #Output (line VARCHAR(MAX));

DECLARE @Command VARCHAR(255) = 'if exist "' + @Path + '" echo 1';

INSERT INTO #Output
EXEC xp_cmdshell @Command;

SELECT TOP 1 @Exists = ISNULL(line, 0)
FROM #Output;

DROP TABLE #Output;

We use xp_cmdshell to issue a command that echoes “1” if the specified path exists. Then, we just look to see whether anything was echoed by the command.

Like most of what we’ll be looking at here, this has to be a stored procedure – even though a scalar-valued function would be more convenient. As we discussed in a previous post, we can’t use INSERT EXEC inside a function, and that’s critical to our success here.
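Calling the procedure looks like this (the workbook path is a hypothetical example):

```sql
-- Check whether a workbook exists before trying to copy or load it.
DECLARE @Exists BIT;

EXEC ext.usp_FileExists
    @Path = 'C:\Temp\Manifest.xlsx'  -- hypothetical path
    ,@Exists = @Exists OUTPUT;

SELECT @Exists AS FileExists;
```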

Next, let’s encapsulate copying a file – which, for our purposes, will be across the network.

CREATE PROCEDURE ext.usp_CopyFile (
    @SourcePath VARCHAR(255)
    ,@DestinationPath VARCHAR(255)
    ,@Overwrite BIT = 0
    ,@Success BIT OUTPUT
)
AS
SET NOCOUNT ON;
SET ANSI_WARNINGS OFF;

SET @Overwrite = ISNULL(@Overwrite, 0);

CREATE TABLE #Output (line VARCHAR(MAX));

DECLARE @Command VARCHAR(255) =
    'copy '
    + '"' + @SourcePath + '" '
    + '"' + @DestinationPath + '" '
    + IIF(@Overwrite = 1, '/Y', '/-Y');

INSERT INTO #Output
EXEC xp_cmdshell @Command;

SELECT @Success = IIF(COUNT(line) = 1, 1, 0)
FROM #Output;

DROP TABLE #Output;

The source and destination paths here can be UNC paths, which will enable us to reach across the network. The “overwrite” parameter determines whether the COPY command silently overwrites an existing destination file (/Y) or silently declines to (/-Y).
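Here’s an example call; the UNC and local paths are hypothetical:

```sql
-- Copy a workbook from a network share to a local folder, overwriting if present.
DECLARE @Success BIT;

EXEC ext.usp_CopyFile
    @SourcePath = '\\fileserver\share\Manifest.xlsx'  -- hypothetical UNC path
    ,@DestinationPath = 'C:\Temp\Manifest.xlsx'       -- hypothetical local path
    ,@Overwrite = 1
    ,@Success = @Success OUTPUT;

SELECT @Success AS CopySucceeded;
```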

Now, what about when we need to delete a workbook from our local TEMP folder?

CREATE PROCEDURE ext.usp_DeleteFile (
    @Path VARCHAR(255)
    ,@Success BIT OUTPUT
)
AS
SET NOCOUNT ON;
SET ANSI_WARNINGS OFF;

CREATE TABLE #Output (line VARCHAR(MAX));

DECLARE @Command VARCHAR(255) = 'del "' + @Path + '"';

INSERT INTO #Output
EXEC xp_cmdshell @Command;

SELECT @Success = IIF(COUNT(line) = 0, 1, 0)
FROM #Output;

DROP TABLE #Output;

These commands haven’t changed much since the Windows command line was the DOS command line, so DEL probably looks as familiar as COPY. This might be a good time to mention that SQL will access the command shell as its service account, so take appropriate security precautions. Only you can prevent forest fires.

Next, we need to get creative and find the modified date of a file. Spoiler alert: there are two treats in this goodie bag. First, we’re going to use FIND (Windows’ foray into GREP) to extract the relevant line from some DIR output. Second, we’re going to respond to some specific Windows messages.

CREATE PROCEDURE ext.usp_GetModifiedDate (
    @Path VARCHAR(255)
    ,@Name VARCHAR(255)
    ,@ModifiedDate DATETIME2(0) OUTPUT
)
AS
SET NOCOUNT ON;
SET ANSI_WARNINGS OFF;

CREATE TABLE #Output (line VARCHAR(MAX));

DECLARE @Command VARCHAR(255) =
    'dir "' + @Path + '\' + @Name + '" | '
    + 'c:\windows\system32\find.exe /i "' + @Name + '"';

INSERT INTO #Output
EXEC xp_cmdshell @Command;

SELECT TOP 1
    @ModifiedDate =
        CASE
            WHEN line IN (
                'File Not Found'
                ,'The network path was not found.'
                ,'The network name cannot be found.'
                ,'The system cannot find the file specified.'
            )
            THEN NULL
            ELSE CONVERT(DATETIME2(0), SUBSTRING(line, 1, 20))
        END
FROM #Output;

DROP TABLE #Output;

Now you can tweet that you used FIND, and there was no time machine involved. Congrats.

One more support procedure before we take a break (and resume tomorrow). This procedure will identify the Windows TEMP folder, so we can use it as a landing zone…like a BOSS. (Note: the final version of the code doesn’t use this, but I’m leaving it here for you to use in some other imaginative, BOSS-like fashion.)

CREATE PROCEDURE ext.usp_GetTempFolder (
    @TempFolder VARCHAR(255) OUTPUT
)
AS
SET NOCOUNT ON;
SET ANSI_WARNINGS OFF;

CREATE TABLE #Output (line VARCHAR(MAX));

DECLARE @Command VARCHAR(255) = 'echo %temp%';

INSERT INTO #Output
EXEC xp_cmdshell @Command;

SELECT TOP 1 @TempFolder = line FROM #Output;

DROP TABLE #Output;

By now, that should just look like more of the same. In tomorrow’s post we’ll talk about logging, manifesting, and more.

Who doesn’t enjoy a healthy dose of binary theory? If the answer is you, best back away slowly and close the door. If you stick around, however, you might just learn how to (mis)use T-SQL to leverage bitmask flags.

I say “(mis)use” because, frankly, T-SQL provides a very efficient mechanism for bit flags. It’s the bit data type. It takes up (as the name implies) a single bit of storage, and can only have one of two values – 0 or 1. It’s as efficient a flag as you could ask for.

That said, I come from a programming background, where bitmasks and their innate ability to encapsulate a set of related flags are highly regarded. I’ve got an ongoing personal project (the kind that sits in a dusty shoebox with no air-holes in a dark corner of a basement closet in an abandoned house inside an active volcano…you know what I mean) in which I’m trying to push as much number crunching as possible into the database. So, this became an interesting exercise in SQL tomfoolery.

Binary in a Nutshell

In case you’re unfamiliar with binary, here’s a quick primer.

If you’re a computer (which is unlikely), a series of bits is all you need to represent any number. Bits are expressed using only two numerals (0 and 1, a.k.a. “true” and “false”, “on” and “off”, etc.), and therefore the binary number system is referred to as base two.

By contrast, we use the decimal number system, also known as base ten, in everyday life. Base ten uses ten numerals (0-9). When counting, if we run out of numerals in a single column (i.e. after we count past nine), we “roll over” to the next column, start counting in that column at 1, and reset the previous column to 0. That’s just a detailed way of saying we go from 9 (a one-digit number) to 10 (a two-digit number). The important thing here is that the 1 in the number 10 stands for “one ten” and the 0 stands for “zero ones”. By the same token, in the number 24, the 2 stands for “two tens” and the 4 stands for “four ones”. We tend to think of “24” and “twenty-four” as synonymous, but that’s only true in the context of base ten.

In base two, zero is 0 and one is 1. Simple enough. When counting to two, however, we’ve already exhausted the available numerals. So, we “roll over” to the next column, start counting in that column at 1, and reset the previous column to 0. Those are the exact same words I used in the previous paragraph. By following those directions in base two, though, we find that “10” represents the number two. And “11” represents the number “three”. And, by rolling over again, we find that “100” represents the number “four”.

To demonstrate this a bit more visually, the column values in a base ten number are as follows:

… | 1000 | 100 | 10 | 1

The column values are the powers of the base, or in this case, the powers of ten.

… | 10^3 | 10^2 | 10^1 | 10^0

These, then, are the column values in a base two number:

… | 1024 | 512 | 256 | 128 | 64 | 32 | 16 | 8 | 4 | 2 | 1

These can also be expressed as the powers of two.

… | 2^10 | 2^9 | 2^8 | 2^7 | 2^6 | 2^5 | 2^4 | 2^3 | 2^2 | 2^1 | 2^0

So, here are two ways of representing the number “twenty-four”. First, in base ten:

… | 1000 | 100 | 10 | 1
… |    0 |   0 |  2 | 4

And now in base two:

… | 1024 | 512 | 256 | 128 | 64 | 32 | 16 | 8 | 4 | 2 | 1
… |    0 |   0 |   0 |   0 |  0 |  0 |  1 | 1 | 0 | 0 | 0

Just as we can use base ten to express that “two tens plus four ones equals twenty-four”, we can use base two to say that “one sixteen plus one eight equals twenty-four”.

If you’ve never worked with binary numbers before, you might enjoy converting some decimal numbers to binary before moving on. How would you write the decimal number 123 in binary? How about your age, or the current year?

I’ll forego additional conversation around these basics. A web search can fill in any details I’ve neglected, but I’ve touched on the parts that are most relevant to our foray into T-SQL bitmasks.

Bitwise Operators

T-SQL provides a wide variety of mathematical operators, many of which we use regularly (e.g. for addition or multiplication). Less common, though, are those which inspect and operate on the bits that compose our numbers.

This chart summarizes the bitwise operators relevant to this post.

AND (T-SQL operator &): determines which bits are “on” in both operands.
Example: SELECT 5 & 3 returns 1 (0101 & 0011 = 0001).

OR (T-SQL operator |): determines which bits are “on” in either operand.
Example: SELECT 5 | 3 returns 7 (0101 | 0011 = 0111).

XOR (T-SQL operator ^): determines which bits are “on” in exactly one operand.
Example: SELECT 5 ^ 3 returns 6 (0101 ^ 0011 = 0110).

NOT (T-SQL operator ~): flips all the bits of a single operand.
Example: SELECT ~5 returns -6 (~0101 = …1111 1010 in two’s complement).

For the sake of time, I’ll trust that the summary above is self-explanatory. I might talk about XOR, NOT, and other bitwise operators (such as the shifting operators) in a future post. Today, we’re going to focus on AND and OR.

The AND operator looks for the bits that are common to two numbers. In other words, it determines which bits from one number are also part of another. This is similar to the way INTERSECT looks for the values that are common to two result sets.

The OR operator looks for all the bits that are used by either of two numbers. It effectively combines those numbers, as you can see in the chart above. This is similar to the way UNION combines the values from two result sets.

Here’s the cool part. Since we can essentially use OR to combine numbers, if we stick to powers of two (each of which has only one bit enabled), we can create a number that represents a combination of those powers of two.

16 | 4 | 1 = 10000 | 00100 | 00001 = 10101 = 21

Now, we can use the AND operator to see whether one of our powers-of-two is part of our composite number.

21 & 16 = 10101 & 10000 = 10000 (true)

Since the result of the AND operation is non-zero, it is true that the 16 “flag” is set in the number 21.

Here’s another example:

21 & 8 = 10101 & 01000 = 00000 (false)

Since the result of the AND operation is zero, it is false that the 8 “flag” is set in the number 21.

Very good – this confirms the bitwise math we just performed. Let’s crank it up a notch and use bitwise operators to inspect the bits themselves.

DECLARE @x INT = 0;

SET @x = @x | 16;
SET @x = @x | 4;
SET @x = @x | 1;

SELECT
    @x AS x
    ,IIF(@x & 16 > 0, 1, 0) AS [16]
    ,IIF(@x & 8 > 0, 1, 0) AS [8]
    ,IIF(@x & 4 > 0, 1, 0) AS [4]
    ,IIF(@x & 2 > 0, 1, 0) AS [2]
    ,IIF(@x & 1 > 0, 1, 0) AS [1];

Survey says…

x  | 16 | 8 | 4 | 2 | 1
21 |  1 | 0 | 1 | 0 | 1

These results continue to support assertions from earlier in this post – about AND, OR, and the composition of base two numbers.

At this point, we’ve discussed binary theory and bitwise operations, and we’ve seen two SQL statements that should give you plenty of fodder for your own experimentation. We’ve only just scratched the surface of each topic, but hopefully you’ve seen enough to take the next steps yourself.

Before I close, let’s look at one last little gem. We’ll use a recursive CTE (discussed in several of my previous posts) to create a list of the numbers zero to thirty-one, and we’ll use bitwise AND to unmask the bits and examine the binary composition of each number.

DECLARE @LastNumber INT = 31;

WITH
Sequence AS (
    SELECT 0 AS x
    UNION ALL
    SELECT x + 1
    FROM Sequence
    WHERE x < @LastNumber
)
SELECT
    x
    ,IIF(x & 16 > 0, 1, 0) AS [16]
    ,IIF(x & 8 > 0, 1, 0) AS [8]
    ,IIF(x & 4 > 0, 1, 0) AS [4]
    ,IIF(x & 2 > 0, 1, 0) AS [2]
    ,IIF(x & 1 > 0, 1, 0) AS [1]
FROM Sequence;

Brace yourself for a beautiful binary pattern.

x  | 16 | 8 | 4 | 2 | 1
0  |  0 | 0 | 0 | 0 | 0
1  |  0 | 0 | 0 | 0 | 1
2  |  0 | 0 | 0 | 1 | 0
3  |  0 | 0 | 0 | 1 | 1
4  |  0 | 0 | 1 | 0 | 0
5  |  0 | 0 | 1 | 0 | 1
6  |  0 | 0 | 1 | 1 | 0
7  |  0 | 0 | 1 | 1 | 1
8  |  0 | 1 | 0 | 0 | 0
9  |  0 | 1 | 0 | 0 | 1
10 |  0 | 1 | 0 | 1 | 0
11 |  0 | 1 | 0 | 1 | 1
12 |  0 | 1 | 1 | 0 | 0
13 |  0 | 1 | 1 | 0 | 1
14 |  0 | 1 | 1 | 1 | 0
15 |  0 | 1 | 1 | 1 | 1
16 |  1 | 0 | 0 | 0 | 0
17 |  1 | 0 | 0 | 0 | 1
18 |  1 | 0 | 0 | 1 | 0
19 |  1 | 0 | 0 | 1 | 1
20 |  1 | 0 | 1 | 0 | 0
21 |  1 | 0 | 1 | 0 | 1
22 |  1 | 0 | 1 | 1 | 0
23 |  1 | 0 | 1 | 1 | 1
24 |  1 | 1 | 0 | 0 | 0
25 |  1 | 1 | 0 | 0 | 1
26 |  1 | 1 | 0 | 1 | 0
27 |  1 | 1 | 0 | 1 | 1
28 |  1 | 1 | 1 | 0 | 0
29 |  1 | 1 | 1 | 0 | 1
30 |  1 | 1 | 1 | 1 | 0
31 |  1 | 1 | 1 | 1 | 1

Yet another fun SQL experiment. Hope your brain is tingling. See you next time.

This is a convenience procedure I’ve been using for some time. Sure, it adds some execution overhead, so I don’t use it in mission-critical situations when milliseconds count. But it saves me the annoyance of checking for the existence of a table (especially a temp table) if I’m scripting a DROP.
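The procedure itself didn’t make it into this excerpt. Here’s a minimal sketch of the idea; the procedure name, schema, and parameter are my assumptions, not the original:

```sql
-- Hypothetical sketch: drop a table only if it exists.
-- Temp tables live in tempdb, so we check the current database and tempdb.
CREATE PROCEDURE util.usp_DropTable (
    @TableName VARCHAR(255)
)
AS
SET NOCOUNT ON;

IF OBJECT_ID(@TableName, 'U') IS NOT NULL
    OR OBJECT_ID('tempdb..' + @TableName, 'U') IS NOT NULL
    EXEC ('DROP TABLE ' + @TableName);
```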

There are a variety of scenarios in which it would be beneficial to access the command line from SQL Server. The xp_cmdshell stored procedure allows us to do just that. It can be enabled and disabled using sp_configure (we’ll get to that momentarily), and it returns lines of command output as rows, as if the operating system had been queried.

This post will demonstrate using xp_cmdshell to ping a remote machine. In a previous role, my team used this technique in an ETL process that needed to capture stats on connectivity with hundreds of remote endpoints. It worked like a charm.

I’m sure I don’t need to go into detail about why this might represent a security risk, and I’ll assume you’re capable of fool-proofing any SQL instance under your purview.

As always, let’s stay organized. Here’s a schema to contain our new objects.

CREATE SCHEMA ext;

We named it as such since we were dealing with resources external to SQL Server.

Now, before we can use xp_cmdshell, we need to determine if it’s enabled. This function will do the job nicely.

CREATE FUNCTION ext.svf_IsCmdShellEnabled ()
RETURNS BIT
AS
BEGIN
    RETURN (
        SELECT CONVERT(BIT, value_in_use)
        FROM sys.configurations
        WHERE name = 'xp_cmdshell'
    );
END

Using our function is easy.

SELECT ext.svf_IsCmdShellEnabled() AS IsCmdShellEnabled;

You can simply call the function in an “if” statement, as in, “if xp_cmdshell is not enabled, enable it.”
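For example, paired with the enable procedure defined in this post:

```sql
-- If xp_cmdshell is not enabled, enable it.
IF ext.svf_IsCmdShellEnabled() = 0
    EXEC ext.usp_EnableCmdShell;
```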

So, how do we enable it?

CREATE PROCEDURE ext.usp_EnableCmdShell
AS
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

For good measure, let’s assume you want to disable it when you’re not the one using it (smells like security).

CREATE PROCEDURE ext.usp_DisableCmdShell
AS
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;

Looks pretty similar, except for that all-important “enabled” flag.

Now, enable our new friend (if you haven’t already), and let’s implement the ping procedure. Take a moment to consider how the command line output is being processed. I’m sure you can come up with much more clever uses than this.
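The ping procedure itself didn’t survive into this excerpt. Here’s a sketch in the same style as the other procedures in this series; the procedure name and the “TTL=” success test are my assumptions:

```sql
-- Hypothetical sketch of a ping procedure built on xp_cmdshell.
CREATE PROCEDURE ext.usp_Ping (
    @HostName VARCHAR(255)
    ,@Success BIT OUTPUT
)
AS
SET NOCOUNT ON;
SET ANSI_WARNINGS OFF;

CREATE TABLE #Output (line VARCHAR(MAX));

DECLARE @Command VARCHAR(255) = 'ping -n 1 ' + @HostName;

INSERT INTO #Output
EXEC xp_cmdshell @Command;

-- A successful reply includes a "TTL=" line; an unreachable host produces none.
SELECT @Success = IIF(COUNT(line) > 0, 1, 0)
FROM #Output
WHERE line LIKE '%TTL=%';

DROP TABLE #Output;
```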

In my last post, I described a problem with long-running cube processing and the creative approach my team took to resolve it. We walked through the basics of issuing XMLA commands to SSAS through a linked server, and we built stored procedures to encapsulate the most common processing operations. In this post, we’re going to use those database objects to implement a database swapping process.

We’re one major step away from success, so let’s talk about how our swap process is going to work. Foundationally, we have two SSAS databases. Let’s give them the IDs Instance1 and Instance2. At any given time, one of them will be “active” (named “Production”) and one will be “inactive” (named “Processing”).

Fortunately, when we connect to an SSAS database (from Excel, for example), we connect by name – not by ID. So, if our users connect to the database named “Production”, they can trust that it’s whichever database was most recently, successfully processed – regardless of its ID.

Here’s the pseudo-code for our swap process.

1. Process the database named “Processing”.
2. Rename the “Processing” database “Temp”.
3. Rename the “Production” database “Processing”.
4. Rename the “Temp” database “Production”.
5. Dance in the streets.

Step 5 is optional.

Let’s walk through this and figure out what we need to build. First, we need SQL Server to keep track of which database is active so that we can issue simple commands like “process the Processing database, then swap the databases”. To do that, we need to keep the name of the active database in a table.

This might seem like overkill, but let’s create a simple storage system for name-value pairs. (It wasn’t overkill in our environment; this is where we kept a lot of ETL and maintenance variables.)

First, we want another new schema to keep us organized.

CREATE SCHEMA util;

Then, we need a table for our name-value store. Note that we needed the ability to make some parameters read-only, even though that isn’t relevant to this post.

CREATE TABLE util.Config (
    ParamName VARCHAR(255) NOT NULL
    ,ParamValue VARCHAR(255) NOT NULL
    ,[ReadOnly] BIT NOT NULL
    ,[Description] VARCHAR(500) NULL
    ,CONSTRAINT PK_Config PRIMARY KEY CLUSTERED (ParamName ASC)
);

Next, we need a simple way to save data to the store.

CREATE PROCEDURE util.usp_SetParam (
    @ParamName VARCHAR(255)
    ,@ParamValue VARCHAR(255)
)
AS
SET NOCOUNT ON;
SET ANSI_WARNINGS OFF;

DECLARE @ReadOnly BIT;

SELECT @ReadOnly = [ReadOnly]
FROM util.Config
WHERE ParamName = @ParamName;

IF @ReadOnly IS NULL OR @ReadOnly = 1 RETURN 1;

UPDATE util.Config
SET ParamValue = @ParamValue
WHERE ParamName = @ParamName;

RETURN 0;

I left the read-only handler in there for you. Bonus.

And, of course, we need a way to retrieve a value we’ve saved.

CREATE FUNCTION util.svf_GetParam (
    @ParamName VARCHAR(255)
)
RETURNS VARCHAR(255)
AS
BEGIN
    RETURN (
        SELECT ParamValue
        FROM util.Config
        WHERE ParamName = @ParamName
    );
END
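A quick usage example of the name-value store (the values are illustrative). Note that usp_SetParam only updates existing rows, so the parameter has to be seeded first:

```sql
-- Seed a parameter, update it through the procedure, then read it back.
INSERT INTO util.Config (ParamName, ParamValue, [ReadOnly], [Description])
VALUES ('ActiveDatabaseID', 'Instance1', 0, 'ID of the active SSAS database');

EXEC util.usp_SetParam 'ActiveDatabaseID', 'Instance2';

SELECT util.svf_GetParam('ActiveDatabaseID') AS ActiveDatabaseID;
```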

Now we just need some wrapper objects for our ssas schema. The first is a procedure that sets the active database ID.

CREATE PROCEDURE ssas.usp_SetActiveDatabaseID (
    @ActiveDatabaseID VARCHAR(255)
)
AS
SET NOCOUNT ON;
SET ANSI_WARNINGS OFF;

IF @ActiveDatabaseID IN ('Instance1', 'Instance2')
    EXEC util.usp_SetParam 'ActiveDatabaseID', @ActiveDatabaseID;

Next is a function that retrieves the ID of the active database.

CREATE FUNCTION ssas.svf_GetActiveDatabaseID ()
RETURNS VARCHAR(255)
AS
BEGIN
    RETURN util.svf_GetParam('ActiveDatabaseID');
END

For simplicity later, we’ll also need to know the ID of the inactive database.

CREATE FUNCTION ssas.svf_GetInactiveDatabaseID ()
RETURNS VARCHAR(255)
AS
BEGIN
    DECLARE @ActiveDatabaseID VARCHAR(255) = ssas.svf_GetActiveDatabaseID();

    RETURN
        CASE
            WHEN @ActiveDatabaseID = 'Instance1'
            THEN 'Instance2'
            ELSE 'Instance1'
        END;
END

Finally we can bring it all together in two stored procedures. One will process the inactive database…
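The bodies of those two procedures aren’t shown in this excerpt. Here’s a sketch of the shape they might take; usp_ProcessDatabase is the processing wrapper referenced later in this post, and usp_RenameDatabase is a hypothetical stand-in for the rename wrapper built from the ALTER script:

```sql
-- Sketch: process whichever database is currently inactive.
CREATE PROCEDURE ssas.usp_ProcessInactiveDatabase
AS
SET NOCOUNT ON;

DECLARE @DatabaseID VARCHAR(255) = ssas.svf_GetInactiveDatabaseID();
EXEC ssas.usp_ProcessDatabase @DatabaseID;
GO

-- Sketch: swap names using the "Temp" shuffle from the pseudo-code above,
-- then record the newly active database ID.
CREATE PROCEDURE ssas.usp_Swap
AS
SET NOCOUNT ON;

DECLARE @Active VARCHAR(255) = ssas.svf_GetActiveDatabaseID();
DECLARE @Inactive VARCHAR(255) = ssas.svf_GetInactiveDatabaseID();

EXEC ssas.usp_RenameDatabase @Inactive, 'Temp';        -- hypothetical rename wrapper
EXEC ssas.usp_RenameDatabase @Active, 'Processing';
EXEC ssas.usp_RenameDatabase @Inactive, 'Production';

EXEC ssas.usp_SetActiveDatabaseID @Inactive;
```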

Now, in consecutive steps of a SQL Agent job, we can simply issue these simple, simple calls:

EXEC ssas.usp_ProcessInactiveDatabase;
EXEC ssas.usp_Swap;

And…there was dancing in the streets.

This is slightly different from the actual implementation my team performed:

We expanded this concept to work with multiple “scenarios” – sets of Processing and Production databases for different departments (Finance, Sales, etc.). We modified the wrapper procedure/functions in the ssas schema to accept a @Name parameter that affected the “ParamNames” in the Config table. That’s out of scope, but a good exercise for the reader.

We didn’t use usp_ProcessDatabase. Our source data was downright awful, so our cube processing regularly failed at unpredictable points. Given how long it took to process, we couldn’t afford to start processing from scratch if something went wrong. Instead, we created steps in our Agent job to process each Dimension and each Measure Group independently. (We built the necessary procedures for that in my previous post.) That way, if an object failed to process, we could investigate the source data and re-start the Agent job at the step that failed.

In the immortal words of Forrest Gump…that’s all I have to say about that. I hope you find it useful.

Let’s be honest…as much as I love the SQL Server stack, I’m not a huge fan of Integration Services. I’d rather write MERGE statements than use RBAR UPSERTs to process a data mart, and I prefer the simplicity of Agent jobs that issue T-SQL commands to those that execute packages. So, when I came across the following issue while processing Analysis Services databases, it seemed like a good time to remove SSIS from the equation.

It was taking a long time to process our cubes – upwards of six hours – and during that timeframe the users were left without cube access. Our source data wasn’t available until almost 6am, so our cubes sometimes weren’t available until after noon. And that was a best-case scenario; it assumed that someone responded quickly if processing failed.

We’d already tried redesigning the cubes to reduce their processing time. The business requirements coupled with the granularity and volume of our data left us between a rock and a hard place. If we’d had a budget, we might have added another SSAS server and set up something akin to load-balancing or failover – where we processed one cube and then “swapped” it into production by switching two DNS entries. Necessity being the mother of invention, we got creative and rolled our own in-place cube-swap mechanism. Using only a single instance of SQL Server.

The basic idea was simple – we’d deploy and maintain two identical SSAS databases – a “Production” database and a “Processing” database. We would process the “Processing” database each night, and when processing was complete, we would rename both of the cubes to essentially “swap” them. The swap itself proved to take only a couple of seconds. Imagine our relief when the down-time went from several hours to several seconds.

In order to perform the process I just described, we needed to be able to do the following with T-SQL:

Issue XMLA commands to SSAS. Whereas MDX is the query language used with SSAS databases, XMLA is the XML variant used to administer SSAS databases.

Issue XMLA to process or unprocess SSAS databases, dimensions, and measure groups. Being able to do all of these things would give us macro- and micro-level control in a variety of administrative contexts.

Issue XMLA to rename SSAS databases.

Swap SSAS databases by renaming them.

Fair warning – I won’t be talking about security, injection prevention, or anything outside of essential SQL Server objects and processes. Certainly some of those topics are of critical importance; I trust that you can take steps to address them if you find yourself in need of this solution.

The crux of our approach was a linked server to SSAS. Setting that up was simple. Budget being tight, SSAS and DBE were on the same box, so we just ran this script (note the use of @@SERVERNAME):

EXEC master.dbo.sp_addlinkedserver
    @server = 'SSAS'
    ,@srvproduct = ''
    ,@provider = 'MSOLAP'
    ,@datasrc = @@SERVERNAME;

Note: I haven’t been able to include these code blocks without the quotes getting “prettified”. You’ll have to fix those if you’re copying-and-pasting them into Management Studio.

We’re going to send Remote Procedure Calls (RPC) through this linked server, so we need to enable RPC Out.
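Enabling it is one more system procedure call (sp_serveroption is the standard way to set linked-server options):

```sql
-- Enable RPC Out on the new linked server.
EXEC master.dbo.sp_serveroption
    @server = 'SSAS'
    ,@optname = 'rpc out'
    ,@optvalue = 'true';
```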

Since we were planning to build several stored procedures, we decided to create a schema to keep them organized.

CREATE SCHEMA ssas;

This stored procedure will issue an XMLA command to SSAS using our new linked server:

CREATE PROCEDURE ssas.usp_Execute (
    @XMLA XML
)
AS
DECLARE @Command VARCHAR(MAX) = CONVERT(VARCHAR(MAX), @XMLA);
EXEC (@Command) AT SSAS;

No rocket science here; we’re just accepting an XML parameter, converting it to a VARCHAR, and executing it via our linked server. Note the EXEC (…) AT syntax, which runs a pass-through command on the linked server.

At this point, I generated a lot of XMLA. To do so, I connected Management Studio to Analysis Services, right-clicked on various objects (databases, cubes, dimensions and measure groups) and selected Process. When the processing window appeared, I pressed Ctrl-Shift-N (to script the processing action to a new query window), then clicked cancel to close the processing window. After we removed some unnecessary namespace declarations, it looked like this:
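The generated script itself isn’t included in this excerpt, but a representative XMLA processing command looks roughly like this (the DatabaseID is a placeholder):

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process>
      <Object>
        <DatabaseID>Instance1</DatabaseID>
      </Object>
      <Type>ProcessFull</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
</Batch>
```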

Again, no rocket science. We’re just building the XMLA command to process a database, then executing it with the stored procedure we wrote a moment ago. Very clean. We can use the included parameter to specify which SSAS database to process. (Note that we’re specifying an ID as opposed to a name. This becomes important a bit later.) Later, we added a parameter for the processing Type, and we could have done the same for the write-back tag. I’ll leave that as an exercise for the reader.
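The procedure itself isn’t reproduced in this excerpt; here’s a sketch of the shape it takes. The XMLA structure and ProcessFull type are standard, but the exact body is my reconstruction:

```sql
-- Sketch of the database-processing wrapper described above.
CREATE PROCEDURE ssas.usp_ProcessDatabase (
    @DatabaseID VARCHAR(255)
)
AS
SET NOCOUNT ON;

-- Build the XMLA, splicing in the database ID, then hand it to ssas.usp_Execute.
DECLARE @XMLA XML = '
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process>
      <Object>
        <DatabaseID>' + @DatabaseID + '</DatabaseID>
      </Object>
      <Type>ProcessFull</Type>
    </Process>
  </Parallel>
</Batch>';

EXEC ssas.usp_Execute @XMLA;
```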

Again, we’ve added a new parameter. Like dimensions, cube objects are children of database objects, and measure group objects are children of cube objects. If we want to process a measure group, all three identifiers are necessary.
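That procedure isn’t shown in this excerpt either; a sketch of the three-identifier version (again, my reconstruction of the shape, not the original code):

```sql
-- Sketch: processing a measure group requires all three identifiers,
-- nested database > cube > measure group in the XMLA Object element.
CREATE PROCEDURE ssas.usp_ProcessMeasureGroup (
    @DatabaseID VARCHAR(255)
    ,@CubeID VARCHAR(255)
    ,@MeasureGroupID VARCHAR(255)
)
AS
SET NOCOUNT ON;

DECLARE @XMLA XML = '
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process>
      <Object>
        <DatabaseID>' + @DatabaseID + '</DatabaseID>
        <CubeID>' + @CubeID + '</CubeID>
        <MeasureGroupID>' + @MeasureGroupID + '</MeasureGroupID>
      </Object>
      <Type>ProcessFull</Type>
    </Process>
  </Parallel>
</Batch>';

EXEC ssas.usp_Execute @XMLA;
```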

Just for the sake of completeness, we also provided a way to unprocess an entire database. (Our lack of budget meant we didn’t have the luxury of significant disk space. Sometimes we just had to bite the bullet and purge a cube.)

This looks just like processing a database, except that it has a different Type. When we later updated our procedures to allow processing type to be specified, this procedure actually went away (or, more accurately, just became a wrapper for the main database processing procedure). Again, that’s an exercise for the reader.

One last XMLA script remains – we need a mechanism by which to rename a database. If you ask Management Studio to generate an ALTER script for an existing SSAS cube, the resulting XMLA is very granular, defining every object in the cube. This makes for some hefty scripts that take a while to generate, and frankly, they’re a waste of time and energy (not to mention server cycles) if you just want to rename a database. After some research, we landed here:
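The script we landed on isn’t included in this excerpt, but a properties-only ALTER has this shape; the ID and name are placeholders:

```xml
<Alter ObjectExpansion="ObjectProperties"
       xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>Instance1</DatabaseID>
  </Object>
  <ObjectDefinition>
    <Database>
      <ID>Instance1</ID>
      <Name>Production</Name>
    </Database>
  </ObjectDefinition>
</Alter>
```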

Until now, we’ve been creating XMLA BATCH statements. This is an ALTER statement, similar (conceptually) to the ALTER statements you’re used to writing in T-SQL. Most ALTER statements completely replace the definition of an object. In XMLA, the metadata definition of an object has two parts – properties (high-level descriptors) and content (child objects). We’ve specified that we’re only modifying properties – specifically the ID and Name – which is much faster than redefining the content.

As I mentioned earlier, database IDs and Names are different things, and that plays a critical role in our strategy. We’re only changing the name, partly because the ID is immutable once an SSAS database is deployed.

This is a good time to take a break. We’ll pick up here next time when we implement the swap process.