First of all, never use this in a production environment! This script backs up the transaction logs to a “nul device” for all online databases that are not using the simple recovery model. (In Windows it is indeed spelled “nul”, with one “L”.) The only reason you would want to do this is if you have a non-production environment that uses the full recovery model and architecturally mirrors your production environment. For example, we have a “staging” server that is used for testing our code changes before they go into production. We require the staging environment to be as close to production as possible and have scheduled scripts that sync them weekly. In this scenario, many databases on the staging server use the full recovery model, but we do not want to keep t-log backups; we would rather just throw them away.
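The script itself is not shown in this excerpt. As a hedged sketch (a reconstruction built on `sys.databases` and dynamic SQL, not the author's exact script), it might look like this:

```sql
-- Hedged sketch, not the author's exact script: back up the t-log of every
-- online database not using the simple recovery model to the nul device.
DECLARE @db sysname, @sql nvarchar(max);

DECLARE db_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT name
    FROM sys.databases
    WHERE state_desc = 'ONLINE'
      AND recovery_model_desc <> 'SIMPLE'
      AND database_id > 4;  -- skip system databases

OPEN db_cur;
FETCH NEXT FROM db_cur INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'BACKUP LOG ' + QUOTENAME(@db) + N' TO DISK = N''nul'';';
    PRINT @sql;  -- report progress
    EXEC sp_executesql @sql;
    FETCH NEXT FROM db_cur INTO @db;
END

CLOSE db_cur;
DEALLOCATE db_cur;
```

Backing up to `nul` discards the log data but still truncates the log, which is the whole point of the exercise described above.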

Give access to the working directory to R

Locate and open the “rlauncher.config” file in a text editor. This file will be under the “&lt;sqlserver_instance&gt;\binn” directory. Take a look at the location of WORKING_DIRECTORY; it should contain a “short name” file path, something like “&lt;sqlserver_instance&gt;\EXTENS~1”, where “\EXTENS~1” is equivalent to “\ExtensibilityData”. We need to give R access to this folder. I did this by granting full control to Everyone. You may want to be more restrictive here, but in my case this did not matter.

Right click folder > Properties > Security tab > Advanced > Add

Select a principal (I entered “Everyone”)

Tick “Full control” under basic permissions and click “OK”

Tick “Replace all child object permission entries with inheritable permission entries from this object” and click “OK”
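If you prefer to script the permission change instead of clicking through the dialogs, the built-in icacls utility can do the equivalent (the path below is illustrative; substitute your instance's ExtensibilityData folder):

```bat
:: Grant Everyone full control, inherited by all child objects (OI = object
:: inherit, CI = container inherit, F = full control); /T applies the change
:: to existing files and subfolders as well.
icacls "C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\ExtensibilityData" /grant "Everyone":(OI)(CI)F /T
```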

Now if you rerun the script above you should get a result of “hello, 1”.
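The test script referenced is not included in this excerpt; my assumption is it was a minimal R smoke test along these lines, which returns a single row with a column named “hello” containing 1:

```sql
-- Hypothetical smoke test (not necessarily the author's exact script):
-- pass a one-row result set through R and back to SQL Server.
EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- InputDataSet;',
    @input_data_1 = N'SELECT 1'
WITH RESULT SETS (([hello] int NOT NULL));
```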

A weighted average, also known as a weighted arithmetic mean, is similar to an ordinary average, except that instead of each data point contributing equally to the final average, each data point is “weighted” and thus contributes more or less depending on its weight. The weight is typically some correlated data point that indicates the significance of the value being averaged.

For example, let’s say we were tracking the progress of a project and its various tasks. Our data set includes the task number, percent completed, and estimated hours to complete the task. We want to calculate an overall percent completed for the project based on how complete the individual tasks are. Take a look at the example below.

=AVERAGE(B2:B6)

If we calculate a straight average of the percent complete column then we get 75% completed overall. However, this could be deceiving because some tasks will take longer to complete than others, as indicated in the estimated hours column. Let’s use this estimated hours column as the “weight” in our weighted average.

One way to do this would be to:

multiply the item (% Complete) by the weight (Estimated Hours) at the row level, shown in column D below.

sum up all of those products in column D, and then

divide by the sum of the weight for all records (column C)
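Assuming the same layout as the AVERAGE formula above (% Complete in B2:B6, Estimated Hours in C2:C6, and the helper products in column D), the three steps translate to:

```
D2: =B2*C2                  (fill down through D6)
Weighted average: =SUM(D2:D6)/SUM(C2:C6)
```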

Formulas:

Results:

The result of the weighted average is 49%, which is much different from the 75% given by a straight average. This is because there are items with a high percent completed but low estimated hours to complete, and also items with a low percent completed and higher estimated hours to complete. By including a weight to factor in the level of effort for each item, you get a much more accurate result.

What’s the problem with this approach? If the number of tasks changes then it becomes a fairly manual task to adjust the rows and formulas accordingly. Also, I don’t like the idea of having a “helper” or “work” column inserted into the data set. There is a quicker and simpler way to calculate the weighted average than the method I just explained.
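The quicker method is not shown in this excerpt, but a common single-formula approach that avoids the helper column (and a likely candidate for what the author had in mind) is SUMPRODUCT, which multiplies and sums the two ranges in one step:

```
=SUMPRODUCT(B2:B6,C2:C6)/SUM(C2:C6)
```

Unlike the helper-column method, this adapts to a changing number of tasks just by adjusting the two ranges.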

I had a need to concatenate and comma-separate some multi-row data into an array of values, each with an unknown number of elements. In other words, I wanted to take a many-to-one parent-child relationship, collapse the many child rows into the one parent record, and separate the child record values with commas. In the past, my default method for solving this problem was to build a temporary table and use a loop to iterate through the data set, appending elements by updating rows in the temporary table, or to use a common table expression with anchor and recursive members. Recently I stumbled upon the STUFF and FOR XML functions. I had seen these functions before but never took the time to understand their potential uses. These functions can be used to solve the problem mentioned.

T-SQL For XML (Path Mode)

Function Description: A SELECT query returns results as a rowset. You can optionally retrieve the formal results of a SQL query as XML by specifying the FOR XML clause in the query. The FOR XML clause can be used in top-level queries and in subqueries. The top-level FOR XML clause can be used only in the SELECT statement. In subqueries, FOR XML can be used in INSERT, UPDATE, and DELETE statements. It can also be used in assignment statements.

In a FOR XML clause, you specify one of these modes: RAW, AUTO, EXPLICIT, PATH. We will only use PATH for this exercise.

The PATH mode together with the nested FOR XML query capability provides the flexibility of the EXPLICIT mode in a simpler manner.

The EXPLICIT mode allows more control over the shape of the XML. You can mix attributes and elements at will in deciding the shape of the XML. It requires a specific format for the resulting rowset that is generated because of query execution. This rowset format is then mapped into XML shape. The power of EXPLICIT mode is to mix attributes and elements at will, create wrappers and nested complex properties, create space-separated values (for example, OrderID attribute may have a list of order ID values), and mixed contents.

I will not list the syntax for this function because it can get pretty complex very quickly across all of the options. Instead, you can see in the example that I just use it to concatenate the rows into a comma-separated list. Any other XML formatting is effectively suppressed by passing an empty string as the argument: PATH('').

T-SQL Stuff Function

Function Description: The STUFF function inserts a string into another string. It deletes a specified length of characters in the first string at the start position and then inserts the second string into the first string at the start position.
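As a hedged sketch of the pattern the two functions enable (table and column names below are hypothetical, purely for illustration): the correlated FOR XML PATH('') subquery produces a string like ',Alpha,Bravo', and STUFF(..., 1, 1, '') deletes the leading comma.

```sql
-- Hypothetical parent/child data for illustration.
DECLARE @Child TABLE (ParentID int, ChildName varchar(50));
INSERT INTO @Child VALUES (1, 'Alpha'), (1, 'Bravo'), (2, 'Charlie');

SELECT p.ParentID,
       STUFF((SELECT ',' + c.ChildName
              FROM @Child AS c
              WHERE c.ParentID = p.ParentID
              FOR XML PATH('')), 1, 1, '') AS ChildList
FROM (SELECT DISTINCT ParentID FROM @Child) AS p;
-- ParentID 1 -> 'Alpha,Bravo'; ParentID 2 -> 'Charlie'
```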

“Exciting news! Starting today, SQL Server 2014 Developer Edition is now a free download for Visual Studio Dev Essentials members (you will be prompted to sign in to Visual Studio Dev Essentials before you can download SQL Server 2014 Developer Edition). We are making this change so that all developers can leverage the capabilities that SQL Server 2014 has to offer for their data solution, and this is another step in making SQL Server more accessible. SQL Server Developer Edition is for development and testing only, and not for production environments or for use with production data.”

“SQL Server 2016 Developer Edition, when released later this year, will also be free. To learn more about the exciting new features in SQL Server 2016, read the datasheet.”

I have been struggling with creating a user who has read-only access to an Azure SQL database and finally figured it out. The key is connecting to the right database in a new session when executing the T-SQL commands. This is because you cannot execute a “USE” statement or change the database of the current session via the GUI, and you cannot make named cross-database references in your T-SQL, i.e. references of the form:

server_name.database_name.schema_name.object_name

Transact-SQL

--OPEN NEW SESSION TO MASTER DB
CREATE LOGIN ReadOnlyLogin
WITH PASSWORD = 'ENTER STRONG PASSWORD';

CREATE USER ReadOnlyUser
FOR LOGIN ReadOnlyLogin
WITH DEFAULT_SCHEMA = dbo;

Transact-SQL

--OPEN NEW SESSION TO YOUR AZURE DB
CREATE USER ReadOnlyUser FROM LOGIN ReadOnlyLogin;
EXEC sp_addrolemember 'db_datareader', 'ReadOnlyUser';

Validate that the user has read-only permissions by connecting to the database with the new credentials and executing something like “CREATE TABLE dbo.test (test datetime);”, which should return an error message indicating permission was denied.

I wanted to compare WiFi coverage of two different wireless access points and their various configurations in my house, and also check for dead spots. One quick and cheap way to do this that I have come across is using a free product called Ekahau HeatMapper.

Ekahau HeatMapper allows you to walk around your home or office and capture information about WiFi signals in various spots, and then the software aggregates this information into a banded topographical map.

This code drops the login and the users in all databases for a provided user name, and reports on its progress. It only works where the user and login names match. The script does not check whether the users own any objects.
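The script itself is not included in this excerpt; a hedged reconstruction matching the description (drop the user from each online database where it exists, then drop the server-level login, printing progress along the way) might look like this. The @name value is illustrative:

```sql
-- Hedged reconstruction, not the author's exact script.
-- Assumes the user and login share the same name; does not check object ownership.
DECLARE @name sysname = N'SomeUser';  -- illustrative login/user name
DECLARE @db sysname, @sql nvarchar(max);

DECLARE db_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases WHERE state_desc = 'ONLINE';

OPEN db_cur;
FETCH NEXT FROM db_cur INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Drop the user only in databases where it actually exists.
    SET @sql = N'USE ' + QUOTENAME(@db) + N';
IF DATABASE_PRINCIPAL_ID(' + QUOTENAME(@name, '''') + N') IS NOT NULL
BEGIN
    DROP USER ' + QUOTENAME(@name) + N';
    PRINT N''Dropped user in '' + DB_NAME();
END';
    EXEC (@sql);
    FETCH NEXT FROM db_cur INTO @db;
END

CLOSE db_cur;
DEALLOCATE db_cur;

-- Finally, drop the server-level login if it exists.
IF SUSER_ID(@name) IS NOT NULL
BEGIN
    SET @sql = N'DROP LOGIN ' + QUOTENAME(@name) + N';';
    EXEC (@sql);
    PRINT N'Dropped login ' + @name;
END
```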