Archive for: June 20th, 2016

This lock class can best be appreciated by comparing it to a mutex. Like a mutex, a reader-writer lock can only be acquired in exclusive mode by one requestor at a time, but instead of exposing only this exclusive-acquire (Writer) option, it also allows acquisition in shared (Reader) mode. This is completely natural to us database folks, of course, because the semantics are a subset of the behaviours we get from familiar database locks.

Basic rules of the road:

Any number of simultaneous “clients” can share ownership of the lock in Read mode.

Readers block writers.

Writers block readers and other writers.

Blocking means that the requesting worker gets suspended (scheduled off the processor) and accrues a wait of a type specified in the lock acquisition request.
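The post goes much deeper than the database-lock analogy, but that analogy makes the rules above easy to see in action. Here's a minimal two-session T-SQL sketch, with dbo.SomeTable and its columns as hypothetical placeholders:

-- Session 1: acquire a shared (Reader) lock and hold it until COMMIT.
BEGIN TRAN;
SELECT COUNT(*)
FROM dbo.SomeTable WITH (TABLOCK, HOLDLOCK);

-- Session 2 (separate connection): an exclusive (Writer) request now
-- blocks; the session is suspended and accrues a lock wait, per the
-- last rule above.
UPDATE dbo.SomeTable SET SomeCol = SomeCol WHERE Id = 1;

-- Session 1 again: releasing the shared lock lets the writer proceed.
COMMIT;

Any number of additional sessions could run the same SELECT concurrently while session 1 holds its lock; shared requests don't conflict with each other.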

There’s a huge amount of detail here, and I for one am glad that there isn’t a quiz later.

Previously, an Azure Data Lake Analytics account was limited to a total of 60 units of parallelism available to 3 concurrent U-SQL jobs, and each U-SQL job was limited to using a maximum of 20 units of parallelism. However, sometimes developers want to run a U-SQL job that uses more parallelism; for example, they might want two concurrently running jobs that each use 30 units. Today, we are removing the per-job limit. Now you can concurrently run jobs that use any amount of parallelism, as long as the total for the running jobs doesn’t exceed the maximum for your account (currently 60 units of parallelism).

If you are already running U-SQL jobs, there’s no need to worry: this change doesn’t require any action on your part, and your jobs run just as they did before. But now, if you want to, you can take advantage of all the parallelism in your account.

This doesn’t change the available performance units, so there’s no billing change.

Did you know a single backup file can contain multiple database backups? When you back up a database to a file, if that file already exists, then by default the new backup is appended to the existing file, causing the file to grow. If it’s the same database (and yes, a single file can contain backups from multiple databases), the file size will double or more with each backup. This behavior is controlled by the INIT/NOINIT clause of the BACKUP DATABASE command. NOINIT (the default) tells SQL Server to append the new backup to the existing file; INIT tells SQL Server to overwrite the existing backup sets in the file. Note that the media header of the file is not re-initialized.
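A quick sketch of that behavior (MyDb and the backup path are placeholders):

-- First backup: the file doesn't exist yet, so it is created.
BACKUP DATABASE MyDb TO DISK = N'C:\Backups\MyDb.bak';

-- NOINIT is the default, so this APPENDS a second backup set and the
-- file roughly doubles in size.
BACKUP DATABASE MyDb TO DISK = N'C:\Backups\MyDb.bak' WITH NOINIT;

-- INIT overwrites the existing backup sets (the media header remains).
BACKUP DATABASE MyDb TO DISK = N'C:\Backups\MyDb.bak' WITH INIT;

-- One row per backup set currently stored in the file.
RESTORE HEADERONLY FROM DISK = N'C:\Backups\MyDb.bak';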

Anyway, I’ve combined these two pain points to create a BIML routine that uses EPPlus to output multi-sheeted Excel spreadsheets reliably and quickly.

At the moment it’s very basic: take SQL statements and output the data to an Excel file. But in time I’m hoping to create some metadata to start ‘getting the crayons out’ and make them look a bit prettier.

4. Azure SDK

The Azure SDK sets up lots of libraries; the main features we are looking for from the Azure SDK right away are (a) the ability to use the Cloud Explorer within Visual Studio, and (b) the ability to create ARM template projects for automated deployment purposes. In addition to the Server Explorer we get from Visual Studio, the Cloud Explorer from the SDK gives us another way to interact with our resources in Azure.

One article in this series explained that new databases created in SQL Server 2016 will use “Indirect Checkpoint” by default. Indirect checkpoint was added in SQL Server 2012, but has not previously been enabled by default for new databases. The article emphasizes this point.
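For reference, here’s a small sketch of checking and changing the setting yourself (MyDb is a placeholder; 60 seconds matches the SQL Server 2016 default for new databases):

-- 0 means legacy automatic checkpoints; a non-zero value means the
-- database uses indirect checkpoint with that target recovery time.
SELECT name, target_recovery_time_in_seconds
FROM sys.databases;

-- Switch an existing database over to indirect checkpoint.
ALTER DATABASE MyDb SET TARGET_RECOVERY_TIME = 60 SECONDS;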

You can see that our problem query is incredibly easy to find in the top left window based on total duration. Also notice that in the top right Plan summary window, there is currently only one available plan for the query (plan_id 49).

We need to figure out how to get our ‘good plan’, the one generated using Trace Flag 9481, into the Query Store as an available plan that we can force.
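One plausible way to do that, sketched below with a hypothetical table and example IDs (the real query_id and plan_id come from the Query Store windows described above): run the statement with the trace flag scoped to just that query so the Query Store captures the legacy-CE plan, then force it.

-- Re-run the problem query under the pre-2014 cardinality estimator,
-- applied only to this statement; Query Store captures the new plan.
SELECT COUNT(*)
FROM dbo.ProblemTable
WHERE SomeCol = 1
OPTION (QUERYTRACEON 9481);

-- Force the captured plan for that query (IDs are examples).
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 50;

-- And to back it out later:
-- EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 50;

Note that QUERYTRACEON requires elevated permissions, so capturing the plan this way may be a one-time operation run by an admin rather than something baked into the application.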

The cardinality estimator change in SQL Server 2014 wasn’t perfect, but being able to fix individual plans like this makes SQL Server much more powerful.

So the server had plenty of free RAM. But NUMA node 1 was in a pinch. And SSIS spooled its buffers to disk. Doggone it. I guess I’d figured that notifications were sent based on server-wide memory state. But I guess maybe memory state on each NUMA node can lead to a memory notification? The target SQL Server instance, a beefy one, was also on this physical server. There’s 1.5 TB of RAM on the server. 🙂
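Two DMVs that help untangle a case like this; a diagnostic sketch, nothing specific to this server:

-- Memory broken out per NUMA node instead of server-wide.
SELECT memory_node_id, virtual_address_space_committed_kb, pages_kb
FROM sys.dm_os_memory_nodes;

-- Recent low/high memory notifications recorded by Resource Monitor;
-- the XML in the record column includes per-node details.
SELECT timestamp, record
FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type = N'RING_BUFFER_RESOURCE_MONITOR';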

It also looks like the easiest fix is something which was deprecated in Windows Server 2012 R2.