Everyday SQL
Saving DBAs time and effort in their everyday tasks

Management Studio Dark Theme
Tue, 14 May 2019

There is still no love for SQL Server Management Studio. SSMS v18 Beta included a Dark Theme, but somehow that did not make it into the final release of the product. Other apps have this capability (Visual Studio, Visual Studio Code, Chrome, Edge), but not SSMS. If you are like me, sometimes you want to switch your apps to a dark theme for added flair. A quick web search will reveal several ways to hack SSMS into displaying a dark theme, but that may not produce the best results. So, until Microsoft decides to include one, here is how I have been customizing SSMS for the last several years.

Before:

After:

These colors may not work for everyone, but they demonstrate how you can customize SSMS to fit your tastes.

To get started, from the menu select Tools and then Options.

In the Options window, we’ll need to select Environment and then Fonts and Colors.

For Show settings for, select Text Editor and change the font size to 12.

Under Display items, select Plain Text and then click the Custom button next to Item foreground.

In the Color window, change the Red to 228, Green to 228, Blue to 228, and then click OK.

Under Item background, select Black from the dropdown list.

Follow the same steps to change each of these other items below.

Text Editor:

Font size: 12

Display items:

Plain Text: Foreground 228, 228, 228; Background Black
Selected Text: Background Yellow
Line Number: Foreground 255, 255, 128
Comment: Foreground 255, 175, 45
Keyword: Foreground 127, 176, 228
SQL Operator: Foreground 199, 199, 199
SQL Stored Procedure: Foreground 128, 255, 128
SQL String: Foreground Yellow
SQL System Function: Foreground 220, 120, 255
SQL System Table: Foreground 128, 255, 128
String: Foreground Yellow
URL Hyperlink: Foreground Cyan

Next, select Grid Results from the Show settings for dropdown list. Change the font to Microsoft Sans Serif and the size to 12.

Then select Text Results from the Show settings for dropdown list and change the font to Consolas and the font size to 12.

Changing the Grid or Text Results fonts will require you to close and reopen SSMS for the changes to take effect. The last thing is to display line numbers. On the left side of the Options window, select Text Editor, Transact-SQL, and then General. Check the box next to Line numbers and click OK.

Once you have the colors you want, then go to Tools and click Import and Export Settings.

This wizard allows you to save all the customization settings to a settings file. The file can be used as a backup, or you can use that file to import to SSMS running on another workstation.

Happy customizing!

SQL Server Audit Recipe – xp_cmdshell
Tue, 17 Jul 2018

This article assumes you already have a basic understanding of SQL Server Audit, but if not, use this link to catch up on all the details.

Are you required to have xp_cmdshell enabled on one of your servers? If so, then set up a SQL Audit now to track its use. Never mind the implications of enabling xp_cmdshell; as a DBA you are responsible for what happens on your servers, and tracking the use of xp_cmdshell should be a priority.

The first step is to create a server audit to hold the events that we collect.
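The original script isn't shown in this excerpt, so here is a minimal sketch of what it describes; the audit name and file path are placeholders, not the author's values.

USE master;
GO

-- Create the audit; the WHERE predicate keeps only events that reference xp_cmdshell.
CREATE SERVER AUDIT Audit_xp_cmdshell
TO FILE (FILEPATH = 'D:\SQLAudit\')  -- placeholder path
WHERE object_name = 'xp_cmdshell';
GO

ALTER SERVER AUDIT Audit_xp_cmdshell WITH (STATE = ON);
GO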

You’ll notice that we added a WHERE clause that instructs the audit to only collect events that reference the object xp_cmdshell. All other events will be ignored.

Next, we need to create a server audit specification using the SCHEMA_OBJECT_ACCESS_GROUP. This server-level action group is triggered when a permission is used to access an object such as xp_cmdshell.
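A sketch of that specification, reusing the placeholder audit name from above:

CREATE SERVER AUDIT SPECIFICATION AuditSpec_xp_cmdshell
FOR SERVER AUDIT Audit_xp_cmdshell
ADD (SCHEMA_OBJECT_ACCESS_GROUP)
WITH (STATE = ON);
GO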

Viewing the audit log, you can clearly see the command that was executed and the login that called it.
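The log can also be read with T-SQL; a sketch, assuming the placeholder file path used above:

SELECT event_time, server_principal_name, statement
FROM sys.fn_get_audit_file('D:\SQLAudit\*.sqlaudit', DEFAULT, DEFAULT)
WHERE object_name = 'xp_cmdshell';
GO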

As we have seen, if you have a server that has xp_cmdshell enabled, then using SQL Audit can help you keep track of the commands that have been executed.

Everything we have covered here will work in all editions of SQL Server 2012 and above.

SQL Server Morning Health Checks
Tue, 02 Jan 2018

Every now and again as a Microsoft PFE, you get a chance to make a big difference for a customer. One such occasion happened just recently when I was asked to help find a way to automate the daily checks the DBA had to perform every morning. The result was a PowerShell script that reduced that manual task from an hour down to less than a minute.
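The script itself isn't included in this excerpt, but the idea can be sketched in a few lines; the instance names and the single check shown are assumptions, not the actual script.

# Sketch: loop through a list of instances and flag any database that is not ONLINE.
Import-Module SqlServer

$instances = @('SQLPROD01', 'SQLPROD02')  # placeholder instance names

foreach ($instance in $instances) {
    Invoke-Sqlcmd -ServerInstance $instance -Query "SELECT @@SERVERNAME AS ServerName, name, state_desc FROM sys.databases WHERE state_desc <> 'ONLINE';"
}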

Create a Whitelist for SQL Server – UPDATED!!!
Tue, 16 May 2017

A while back, I posted an article about creating a WhiteList for access to SQL Server. Since then I have received a bit of feedback that it was not working as designed. My apologies for taking so long, but I believe I have come up with a fix.

The main issue is the trigger will block some or even all access to the server after it's created. As it turns out, the issues were really permissions being denied. To see it in action, let's create everything using the original code from here.

We'll add one row to the WhiteList table that should allow all users from the workstation ECHOBASE1 access, regardless of its IP address.
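The INSERT itself isn't shown in this excerpt; a hypothetical version, assuming the HostName and IpAddress columns from the original article's table, might look like this:

-- Hypothetical row: a NULL IpAddress means any address from ECHOBASE1 is allowed.
INSERT INTO master.dbo.WhiteList (HostName, IpAddress)
VALUES ('ECHOBASE1', NULL);
GO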

Next, we’ll create a SQL login with only connect permission to the server and nothing else.

CREATE LOGIN LogonTriggerTest WITH PASSWORD = 'Password1';
GO

Finally, we’ll open a new query window using that login.

As you can see, we are denied access to the server because of the logon trigger. If we look in the Errorlog, we can see that we lack the VIEW SERVER STATE permission.

This was my first mistake. I did my initial testing using an administrative login. Most users are not admins; therefore, they will not have the permission required to view sys.dm_exec_connections. I was using this DMV to get the IP address of the client connection, and it requires the VIEW SERVER STATE permission. To get around this, I can use the CONNECTIONPROPERTY function, as it does not require any additional permissions.

SELECT CONNECTIONPROPERTY('client_net_address');
GO

Now let’s try to connect again.

Again, we failed. This would be my second mistake: I failed to grant SELECT access to the WhiteList table. By default, a user will have public access to the master database, but no permission on the table. To solve this, we can grant SELECT permission to the public database role. This will allow any authenticated user to read from the WhiteList table.

USE master;
GO
GRANT SELECT ON dbo.WhiteList TO public;
GO

Finally, our connection to SQL Server is successful. Using the same code from the trigger, we can compare it to what's in the WhiteList table.

If I had followed my own rules, I could have discovered most of these issues before posting the original article.

The fully updated code is below. Please let me know if you run into any other issues with this new version. I also added another column to the WhiteList table that can be used to hold comments. The idea is to provide some documentation about what the white-listed item is attempting to do.
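That updated code isn't included in this excerpt, so here is a minimal sketch of the trigger's shape based on the fixes described above; the table schema and matching logic are assumptions, not the author's final version.

-- Sketch of a whitelist logon trigger using CONNECTIONPROPERTY instead of the DMV.
CREATE OR ALTER TRIGGER WhiteListTrigger
ON ALL SERVER FOR LOGON
AS
BEGIN
    DECLARE @HostName sysname = HOST_NAME();
    DECLARE @IpAddress varchar(48) =
        CONVERT(varchar(48), CONNECTIONPROPERTY('client_net_address'));

    -- NULL columns act as wildcards; no matching row means the login is rolled back.
    IF NOT EXISTS (SELECT 1 FROM master.dbo.WhiteList
                   WHERE (HostName = @HostName OR HostName IS NULL)
                     AND (IpAddress = @IpAddress OR IpAddress IS NULL))
        ROLLBACK;
END;
GO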

Dear McAfee, Why are you destroying my buffer pool?
Tue, 02 May 2017

For the past few months, one of my customers had been trying to diagnose an issue with SQL Server paging out the entire buffer pool each morning around 8AM. See the error below.

Not only was this causing some serious performance issues, but it was also affecting every SQL Server in the environment.

I began my investigation with the standard troubleshooting techniques for buffer pool paging. One of the main workarounds that’s recommended is to enable the “locked pages in memory” permission for the SQL Server service account; however, this was not possible for this customer. Enabling that permission would violate one of their compliance rules, so I needed to find another solution.

The next stop on the investigation trail is the sys.dm_os_ring_buffers DMV. This is a great tool to help determine if the memory pressure came from within SQL Server or from the Windows OS.

WITH RingBuffer AS (
    SELECT
         CAST(dorb.record AS XML) AS xRecord
        ,dorb.timestamp
    FROM sys.dm_os_ring_buffers AS dorb
    WHERE dorb.ring_buffer_type = 'RING_BUFFER_RESOURCE_MONITOR'
)
SELECT
     DATEADD(ms, rb.timestamp - dosi.ms_ticks, GETDATE()) AS [DateTime]
    ,xr.value('(ResourceMonitor/Notification)[1]', 'varchar(75)') AS RmNotification
    ,xr.value('(ResourceMonitor/IndicatorsProcess)[1]', 'tinyint') AS IndicatorsProcess
    ,xr.value('(ResourceMonitor/IndicatorsSystem)[1]', 'tinyint') AS IndicatorsSystem
FROM RingBuffer AS rb
CROSS APPLY rb.xRecord.nodes('Record') AS record(xr)
CROSS JOIN sys.dm_os_sys_info AS dosi
ORDER BY [DateTime] DESC;

Matching the time of our Errorlog entry, we can see the resource monitor reported RESOURCE_MEMPHYSICAL_LOW with an IndicatorsProcess of 0 and IndicatorsSystem of 2. This helps us determine that SQL Server was not the cause of memory pressure and that the issue was caused by another process running on the server. Since these servers are dedicated to SQL Server, I know there are very few applications installed that should be competing with SQL Server for memory.

The next place to check was the Windows event logs. In the Application Log, I found the buffer pool entry by SQL Server (EventID 17890), and then proceeded to search backwards in time. A few minutes before SQL’s error, I noticed this entry.

What is the McShield service? A quick Bing search revealed that it’s one of the services for McAfee VirusScan Enterprise. Could this be the cause? To get a quick look at all the history, I filtered the application log for event IDs: 17890 and 5000. Each time McAfee got an updated virus DAT file, SQL Server soon followed that by paging out the entire buffer pool. I checked the application log on several other SQL Servers for the same event IDs, and sure enough the same events occurred in tandem each morning. I also got confirmation from the security administration team that McAfee is scheduled to check for new DAT files each morning around 8AM. Eureka!
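One quick way to pull that filtered view of the log is PowerShell; a sketch, assuming the default Application log on the local server:

# List SQL Server working-set trim events (17890) alongside McShield events (5000).
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; Id = 17890, 5000 } |
    Sort-Object TimeCreated |
    Select-Object TimeCreated, Id, ProviderName, Message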

This seems like it could be the cause of our paging, but a little more research is needed. Searching the McAfee knowledge base led me to this article about the “Processes on enable” option.

Enabling this option causes memory pages of running processes to get paged to disk. And the example given is “Oracle, SQL, or other critical applications that need to be memory-resident continually, will have their process address space paged to disk when scan Processes On Enable kicks in”. OUCH! So when the McAfee service starts up or it gets a new DAT file, it will page out all processes.

To verify this was really the cause, I needed to setup a test environment. I installed McAfee VirusScan Enterprise (VSE 8.8 Patch 7) on a Windows 2012 R2 server running SQL Server 2016, and then setup HammerDB to generate some activity against the server.

I configured Performance Monitor to capture the “Process:Working Set” and “Process:Page Faults/sec” for the SQL Server process (sqlservr.exe). Once HammerDB had a little bit of time to get running, I manually issued a DAT file update through the McAfee GUI.

Once completed, I checked the Windows Application Log and I could see event ID 5000 that said McShield service had restarted with the newer DAT version 8506.

The McAfee knowledge base article said this would happen when a new DAT file is received or when the McShield service starts up, so I decided to manually stop and restart the McShield service. As expected, I saw the same two event IDs in the application log. Below is the data collected by Performance Monitor for both of those tests.

The red line is the Working Set of SQL Server and the blue line is the Page Faults/sec for SQL Server. The first dip in the red line was from the DAT file update, and the second dip was when I manually restarted the McShield service. Since SQL Server’s buffer pool had just been paged out, it needed to read those pages from disk which results in a page fault.

Now that we know the exclusions were set properly, we can assume the issue is caused by the “Processes on enable” setting as previously mentioned. Let's turn off that setting and rerun our tests.

After giving HammerDB a few minutes to generate some activity, I issued a manual DAT file update through the McAfee GUI. The application log showed Event ID 5000 with a new DAT version of 8507, but there was no record of Event ID 17890 and nothing in the SQL Errorlog about memory paging.

Next, we tested the manual restart of the McShield service and got the same result: Event ID 5000 but no indication of memory paging. Looking at perfmon, the counters showed SQL Server's Working Set and Page Faults were unchanged during the tests.

Once we completed these tests, we took that data back to the security team and asked them to turn off that setting within McAfee. They obliged and so we waited and monitored our SQL Servers for the next few weeks. During that time, we did not have any SQL Server page out memory during the times that McAfee was getting new DAT updates.

After doing a little more research on McAfee VSE, I discovered this setting “Processes on enable” is enabled if you install McAfee with the Maximum Protection level. The Standard Protection level does not have this option enabled.

The key takeaway from this whole experience is that just because a program is a standard install for all servers doesn't mean it won't impact SQL Server. And even though it may have been fully vetted in the past, a change to its settings could impact SQL Server in the future.

The lab tests were done using McAfee VirusScan Enterprise 8.8 Patch 7. I know this behavior is still the same in VSE Patch 8, so I would assume it’s the same in the other patch levels as well.

Blob Auditing for Azure SQL Database
Tue, 18 Apr 2017

In February 2017, Microsoft announced the general availability of Blob Auditing for Azure SQL Database. While auditing features were available before in Azure, this is a huge leap forward, especially in having more granular control over what audit records are captured.

Before Blob Auditing, there was Table Auditing, something I like to equate to the C2 auditing feature of SQL Server: its only configurable options were ON or OFF. In reality, Table Auditing has a few more controls than that, but you get the idea. There was no way to audit actions against one specific table. Blob Auditing provides us with that level of granularity. However, controlling that granularity cannot be accomplished through the Azure Portal; it can only be done with PowerShell or the REST API.

In the image below, you can see that Blob Auditing is on, but we cannot see what actions are being collected.

We can see there are three action groups listed: SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP, FAILED_DATABASE_AUTHENTICATION_GROUP, and BATCH_COMPLETED_GROUP. Without even looking at the documentation, we can assume that we are auditing both successful and failed logins, as well as all successful batches against this database. When you compare the Azure action groups side-by-side with the box product, they line up almost exactly.
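That list comes from Get-AzureRmSqlDatabaseAuditingPolicy; a sketch of the call, with placeholder resource names reused throughout the rest of this post:

# Show the current blob auditing policy, including its action groups.
Get-AzureRmSqlDatabaseAuditingPolicy `
    -ResourceGroupName 'MyResourceGroup' `
    -ServerName 'myserver' `
    -DatabaseName 'MyDatabase'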

So how do we customize it further? Well, let's say our auditing requirements only need to capture changes to the structure of the database; for example, an ALTER TABLE. First, we need to remove BATCH_COMPLETED_GROUP and add DATABASE_OBJECT_CHANGE_GROUP. To accomplish this, we will use Set-AzureRmSqlDatabaseAuditingPolicy.
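The call isn't shown in this excerpt; a sketch, with the same placeholder names:

# Replace BATCH_COMPLETED_GROUP with DATABASE_OBJECT_CHANGE_GROUP.
Set-AzureRmSqlDatabaseAuditingPolicy `
    -ResourceGroupName 'MyResourceGroup' `
    -ServerName 'myserver' `
    -DatabaseName 'MyDatabase' `
    -AuditActionGroup SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP, `
                      FAILED_DATABASE_AUTHENTICATION_GROUP, `
                      DATABASE_OBJECT_CHANGE_GROUP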

To verify the changes were successful, we run Get-AzureRmSqlDatabaseAuditingPolicy again.

Now, we'll be able to collect audit records anytime a CREATE, ALTER, or DROP is executed against a database object. However, let's say we need something more granular. In our sample database, we have a table that stores salary data, and we need to audit anything that touches it. We are already covered for schema changes by the action group DATABASE_OBJECT_CHANGE_GROUP, but that doesn't audit DML changes. Adding BATCH_COMPLETED_GROUP would capture what we need, but that would cover all tables, and we have a requirement for just one. This is where we can audit actions on specific objects. In the statement below, we just add an audit action for SELECT, INSERT, UPDATE, and DELETE on the Salaries table.
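Since the original statement isn't shown in this excerpt, here is a sketch with the same placeholder names; the table name and the AuditAction string format are assumptions based on the documented pattern. Note the existing action groups are restated, because each call replaces the entire policy (more on that below).

Set-AzureRmSqlDatabaseAuditingPolicy `
    -ResourceGroupName 'MyResourceGroup' `
    -ServerName 'myserver' `
    -DatabaseName 'MyDatabase' `
    -AuditActionGroup SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP, `
                      FAILED_DATABASE_AUTHENTICATION_GROUP, `
                      DATABASE_OBJECT_CHANGE_GROUP `
    -AuditAction "SELECT, INSERT, UPDATE, DELETE ON dbo.Salaries BY public"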

To verify the changes were successful, we run Get-AzureRmSqlDatabaseAuditingPolicy again.

If you have multiple objects or actions to audit, separate them with commas, just like the AuditActionGroup parameter. The one key piece to remember is that you must specify all audit actions and action groups together with each execution of Set-AzureRmSqlDatabaseAuditingPolicy; there is no option to add or remove a single audit item. This means if you have 24 actions audited and need to add one more, you have to specify all 25 in the same command.

Now let’s run a few queries to test the audit. First, we’ll run a simple select from the Salaries table.

SELECT * FROM dbo.PlayerSalaryByYear
WHERE Player = 'Jones, Chipper';
GO

Back in the Azure Portal, click on the view button so we can view the captured audit records for the statements we just executed.

What is displayed is one line item for each audit record captured.

Selecting each record will open another blade with the contents of that record. In our example, we have one for the initial SELECT against the table, one for the CREATE VIEW statement, and one for the SELECT against the view which references the Salaries table.

While using the Azure Portal is a quick and easy way to view audit records, the best way to consume the records for reporting is to use the function sys.fn_get_audit_file(). This is the same function used in the box product; the only difference is that we need to specify the Azure URL for the audit log. All audit logs are stored in a container named sqldbauditlogs. Inside that container, additional folders for the server name, database name, and a date/time stamp are created to further organize the logs. This is something to keep in mind if you plan to programmatically process the audit records.
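A sketch of reading the Azure audit log with that function; the storage account, server, and database names are placeholders.

SELECT event_time, action_id, succeeded, statement, client_ip, application_name
FROM sys.fn_get_audit_file(
    'https://mystorageaccount.blob.core.windows.net/sqldbauditlogs/myserver/MyDatabase/',
    DEFAULT, DEFAULT);
GO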

As of this writing, there are two DMVs missing from Azure SQL Database: sys.dm_audit_actions and sys.dm_audit_class_type_map. These DMVs allow us to translate the actions_id and class_type values into a readable description. Since they are not available in Azure, I have created my own version of those as user tables within my database: dbo.audit_actions and dbo.audit_class_types. This allows me to join them against the audit function to produce a better report.

If you are familiar with auditing in the box product, then you might be aware that common properties like client hostnames (or IP address) and application names are not captured for each audit record; however, in Azure they are collected and viewable in the columns client_ip and application_name. See the picture above.

Next, let’s create a stored procedure that selects from that view and add an EXECUTE audit action for it.

AzureRM Module Version
Thu, 13 Apr 2017

When working with the AzureRM PowerShell module, remember the module is constantly being updated to take advantage of new features added to Azure. To check the version of the AzureRM module currently installed on your system, use the following command.

Get-Module AzureRM -List | Select-Object Name, Version, Path

The screenshot below shows I am running version 1.4.0, followed by the output of the Get-AzureRmSqlDatabaseAuditingPolicy cmdlet.

Next, I install the latest AzureRM module.
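One way to do that, assuming PowerShellGet is available:

# Install the newest AzureRM module from the PowerShell Gallery.
Install-Module -Name AzureRM -Force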

After the install is complete, I rerun the same commands. The screenshot below shows that I am now running version 3.8.0, followed by the same Get-AzureRmSqlDatabaseAuditingPolicy cmdlet.

Notice the difference in the output? There are several differences, but the newer version has properties for AuditAction and AuditActionGroup, which were just recently added to Azure.

A Goal of Speaking at PASS Summit

Everyone has goals, or at least they should. As a young professional out of college, I had plenty of goals, but none of them involved SQL Server. I really had no idea where I wanted my career to go other than working in IT. Finally, in 2000, I stumbled upon an opportunity that started me down the career path of a DBA. It didn't take long to realize I enjoy this type of work. Over the next decade, I learned a lot from my coworkers, training classes, and especially my mistakes. Yes, I really did drop the master database by mistake.

But one thing I didn't take advantage of was PASS and its huge community of SQL professionals. I knew about some of the events, but it wasn't until 2012 that I got heavily involved. That decision forever changed my career. Since then, I started a blog about SQL Server, attended PASS Summit 2013, attended SQL Cruise 2014, started speaking at SQL user groups and SQL Saturdays, started working at Microsoft, attended SQL Cruise 2015, and most of all met a huge group of people who enjoy working with SQL Server as much as I do.

Eventually, I set a goal to be a speaker at PASS Summit. Well, this past week I received notification that I had been chosen as a speaker for PASS Summit 2016. Upon hearing the news, I was extremely excited and humbled, especially considering there are far better speakers with more entertaining topics out there.

There are a lot of people who helped me along the way, but I'd like to really say thank you to my friend Andrew Kelly (b|t). Ever since I met him on board SQL Cruise, he has helped me improve in the areas of public speaking, designing presentations, writing abstracts, and code review, and has been an all-around great mentor.

However, this good news could not have come at a worse time. 2016 has been a rough year for my family's health, and it got substantially worse in early September when my wife was diagnosed with cancer. Luckily, doctors caught it very early and her prognosis is good, but we're still not taking any chances.

My good friend Scott, who is a cancer-survivor, told me “cancer is a radical disease and must be treated in radical ways” and to be prepared for any type of news throughout the treatment. I’m confident my wife can beat this and make a quick recovery, but until then I’ll have to postpone speaking at PASS Summit. While I enjoy spending time with all of my friends within the SQL community, I love my wife and must focus all of my time to her recovery. There will always be time for another Summit.

SQLPSX is Finally Getting Updated
Tue, 16 Aug 2016

The most current code is now on GitHub, with the CodePlex version being deprecated. You can read all about the planned updates from Mike Shepard.

The host for T-SQL Tuesday #79 is Michael J. Swart (b|t), and his topic of choice is to write about the new release of SQL Server 2016.

This past weekend I attended the SQL Server 2016 Launch Discovery Day in Raleigh, NC. I have attended several SQL Server launch events over the years, but this one was quite different. While it wasn't a real launch event, it was marketed as a hands-on event to learn about the newest release of SQL Server. The hands-on part was for everyone to break up into teams of five to solve a pre-determined problem. Basically, this was a hackathon for SQL Server, and something I've never done before.

We started early in the morning with a few presentations about SQL Server 2016. Kevin Feasel (b|t) spoke about the new features, and SQL Server MVP Rick Heiges (b|t) spoke about a real-world win using columnstore indexes in SQL Server 2016. Just before our lunch break, the hack was revealed, the basics of which were simple.

First, you are given a data set of just five tables, and from it you must answer questions like the following.

From how far away do attendees travel to SQL Saturday? Are there any geographic trends to the distances traveled?

Does the SQL Saturday session data provide any insight into what sessions are chosen for Summit? Are there any trends in session or topic content that can be established?

Are there any geographical insights that can be discerned? Do the sessions presented at SQL Saturday help predict popularity or selection of sessions at Summit?

Does virtual chapter member data provide any insights into PASS membership growth?

The judges scored each solution based on the following criteria.

Use of new features in SQL Server 2016

Usefulness of the dashboard in Power BI/visualizations for the community

Completeness of the solution

Innovativeness of solution

Bonus points for mobile friendliness

Your solution must be completed in 3 hours.

On paper this all sounds pretty easy, but in practice it was quite hard. I am no BI developer and the other members of my team did not have any expertise in that area either, but we still managed to create a solution and have fun doing so.

The first issue we had was how to combine our development work on the same database. This one was easy: just use Azure. In the span of about 30 minutes, I spun up a new Azure VM with SQL Server 2016 pre-installed, uploaded the database, set up logins, and opened the appropriate ports. I then gave my team members the URL and credentials so they each could connect from their laptops.

One of my team members, Dhruv, wanted to get SQL Server R Services installed to analyze the data set. Machine learning was his specialty, and since R Services is a new feature in SQL Server 2016, we thought this would be a good place to start. However, this proved to be a mistake for our team. We spent way too much time trying to get it set up, mainly due to the need to install the R components without an internet connection, or I should say over a slow one. I wish we could have implemented this, because Dhruv had some really good ideas. Even without R Services, he was able to create some nifty reports using Power BI.

One of my other team members, Mike, spent a lot of time trying to understand the data and how to query it to produce results for our reports. I'm not sure if this was by design, but let me just say the data set was horrible, and every team complained about the time needed to clean it up to the point of being useful. Either way, it was just one of the many problems we needed to overcome. Most of Mike's code was used in the Power BI dashboard that Dhruv created, but he also wrote some code that made good points about membership growth potential; however, we did not have time to build a report for it.

Our team (Team Tiger) finished our solution and presented it to the group, but it was clear from the other presentations that we had been outmatched. The winning solution was from Team Cheetah, who had some unique insights into the data and designed their reports to reflect that detail. Not to mention, their presentation went into a lot of detail about what they had learned.

I really liked the entire event, but I wish we had more time to work on the solution. Three hours seems like a lot, but after thinking about the challenge for a few days, I came up with so many more ideas. For starters, one of the areas to score points was the use of new SQL Server 2016 features. That basically translates into ANY new feature, whether it helps you produce a better looking report or not. With that in mind, I could have done the following.

Enabled Row-Level Security so the reports could show different data based on which user is running them.

Spun up additional VMs in different Azure datacenters so I could create a load-balanced Availability Group that would provide high availability and better response time for users closer to each datacenter.

While none of these things would have improved our presentation using Power BI, they are tasks I could have easily implemented given my skill set. And implementing them would have definitely scored us a lot more points for the use of new SQL Server 2016 features. This is the big lesson I learned from the event: always play to your strengths. Don't try to learn a new skill in a few hours; just use the ones you already have. It will be a much better use of your time and will most likely produce a better end result.

As I said, this was my first hackathon of any kind, but now I can’t wait to attend another one; especially one that deals with SQL Server.