The views expressed herein may not reflect the views of my employer, the Joint Chiefs of Staff, or the Council of Elrond. Fnord.

My Dad had a serious medical event which left me scrambling to get to Phoenix. That particular event became more easily understood, but it revealed an even more serious problem, one of which we're only now developing the full picture. So I'm going to be otherwise occupied for a while, and I won't be posting here for probably at least another month or more.

I read a post recently saying that updated statistics could prove more important to performance than optimizing indexes. Cool stuff, and possibly much lighter on your server load. If I can find the article that I read, I’ll post a link and/or an excerpt.

Here’s a couple of scripts using the undocumented sp_msforeachtable stored procedure, provided by MS since time began or shortly thereafter.
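A minimal sketch of the kind of script I mean (the exact code may differ from what I ran; STATS_DATE() and sys.indexes are the standard way to get this information):

```sql
-- One result set per table, one row per index, with the date its
-- statistics were last updated. sp_msforeachtable replaces '?' with
-- the qualified table name.
EXEC sp_msforeachtable @command1 = N'
    SELECT ''?'' AS TableName,
           i.name AS IndexName,
           STATS_DATE(i.object_id, i.index_id) AS StatsDate
    FROM sys.indexes AS i
    WHERE i.object_id = OBJECT_ID(''?'');';
```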

The results are interesting: you get one result pane for every table, with one row for every index. If your table name is part of the index name, identifying what is what shouldn't be difficult. But what was really interesting was the group of indexes, including PKs, whose StatsDate field was still NULL even after running an UPDATE STATISTICS. It took me a minute to realize that those tables currently had no records. *facepalm*

That ‘?’ functions as a variable substitution for the table name within the SP. Its companion undocumented SP, sp_msforeachdb, has a similar wildcard for the database name. One problem with sp_msforeachtable is that @command1 only accepts 128 characters. This is probably a carryover from the days when the maximum length of a char/varchar data type was 255 characters, with nchar/nvarchar taking half that. In more modern SQL Server, varchar can hold up to 8,000 characters (and varchar(max) far more), so it's not quite the problem that it used to be. And you can find numerous rewrites of these stored procedures that do all sorts of spiffy things, probably also including bypassing this 128-character limit.
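For example, a quick sketch using the database-level companion, where '?' stands in for each database name in turn:

```sql
-- List each database on the instance along with its recovery model.
-- sp_msforeachdb replaces '?' with every database name.
EXEC sp_msforeachdb @command1 = N'
    SELECT ''?'' AS DatabaseName,
           DATABASEPROPERTYEX(''?'', ''Recovery'') AS RecoveryModel;';
```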

Next, update EVERY table's statistics. This is fine and pretty fast if your system is pretty small. If your system is pretty big, I hope you know what you're doing before running something like this!

sp_msforeachtable @command1="UPDATE STATISTICS ?"
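If the default sampling isn't enough, the same one-liner can take options. For example (much heavier on I/O, so use with care):

```sql
-- Scan every row when rebuilding statistics instead of sampling.
EXEC sp_msforeachtable @command1 = N'UPDATE STATISTICS ? WITH FULLSCAN';
```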

Should you update statistics for every table every night, or multiple times every day? The answer to that is left to the reader to research and decide what is appropriate to their installation(s).

If we look at the execution plan, we see a hidden implicit conversion: applying the LEN() function to a float forces SQL Server to convert the value to a string first. These implicit conversions can really hurt query performance: take a look at Grant Fritchey's SQL Server Execution Plans, published by Red Gate.
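A small sketch of the kind of thing I mean (the table and column names are made up for illustration):

```sql
-- LEN() expects character data, so feeding it a float forces a hidden
-- CONVERT_IMPLICIT to a string type that shows up in the execution plan.
CREATE TABLE dbo.Readings (Amount float);   -- hypothetical table
INSERT INTO dbo.Readings (Amount) VALUES (123.456);

SELECT LEN(Amount) AS AmountLength          -- implicit float-to-string conversion
FROM dbo.Readings;
```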

I love metadata. I think it's really cool looking at the underpinnings of SQL Server. While cruising through system views this morning, I came across one that I'd never seen before. I'm not 100% certain that it is a Microsoft-supplied view; it's possible that it was provided by one of the utility packages that I've installed.

Since this is my dev server, I'm not surprised these numbers are zero. I think it would be much more interesting to run this on a production server. I suspect that if you have multiple linked servers you'll get one row per server, but I don't have the environment to test this.

Here’s the odd bit: now I can’t find the view again! But selecting from that view does return one row. So I turned on Include Actual Execution Plan and the results were intriguing.

The first operator (bottom-most, right-most) was a table scan against a TVF called fn_get_health_status, which is part of the [msdb].[smartadmin] schema.

I’m running SQL Server 2014 Developer Edition, and Help for that TVF says “Smart Admin automates administration tasks on the SQL Server instance using minimal user configuration.” Cool. Definitely a good thing. I’m used to monitoring a lot of things in SQL Server by myself, using tools that I’ve developed over the last 20ish years. And I’d never heard of Smart Admin.

Poking around in Books Online and via a Google search, I came across a Microsoft page on Smart Admin, which says: “The function is used to report health status of services under Smart Admin.”
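Querying the TVF directly is straightforward. On my 2014 instance, calling it with no arguments appears to use its defaults:

```sql
-- Returns a row of Smart Admin health counters, e.g. error counts
-- for the managed backup services.
SELECT * FROM msdb.smartadmin.fn_get_health_status();
```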

Definitely something worth looking into, with the intent of making some utilities to simplify monitoring servers.

The key to this is the constraint. The ID field doesn't need to be an identity; you can omit that property as long as you insert the value 1 yourself. Users should have only SELECT permissions on the table so that your application program can read it; when you need to modify any information, you do it directly at a higher permission level.
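A sketch of the idea, with illustrative names. The essential parts are the PRIMARY KEY and the CHECK constraint pinning ID to 1, which together guarantee the table can never hold more than one row:

```sql
CREATE TABLE dbo.AppSettings (
    ID          int NOT NULL
        CONSTRAINT PK_AppSettings PRIMARY KEY
        CONSTRAINT CK_AppSettings_OneRow CHECK (ID = 1),
    AppIsLocked bit NOT NULL
        CONSTRAINT DF_AppSettings_AppIsLocked DEFAULT (0)
);

-- The only row the table can ever hold:
INSERT INTO dbo.AppSettings (ID, AppIsLocked) VALUES (1, 0);
```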

Personally, I like the AppIsLocked field. In my application, the startup menu is actually invisible: it checks whether the current user has access to the system and then launches the appropriate menu. But first it checks the AppIsLocked field, and if it is true, it doesn't allow them into the system at all.

An architect on the SQL Server team, Conor Cunningham, responded to Brent Ozar's post from last week. In it, he states that the information Brent posted was not entirely accurate.

First, the auto-update issue. Conor says that when you run the setup program, if the system is connected to the internet, it will look for certain critical hotfixes and download them for you. They had problems with the VC runtime. Otherwise, any patching after installation is controlled by the server administrator through the usual Windows Update methodology.

He also notes that SSMS 2016 is now a discrete product and has monthly update pushes that may not correspond to updates for the actual database engine or other DB components.

Next, the ‘phone home’ issue. It appears that it is indeed opt-out, but it's easy to configure the product not to send telemetry UNLESS you're using a free version. I'm unclear whether that means just the Express edition or whether it includes the Developer edition as well. He also states that this telemetry can easily be blocked by firewalls.

The full text of Conor’s reply is below this cut, and this link will take you back to the original post.

Interesting problem. I'm loading some data into Access that includes a memo field that could potentially contain many kilobytes of information. I created a report and eventually noticed that not all of the memo field was being output! Very odd.

A quick Google search found a post by Allen Browne that described the issue and the solution. Simply put, Access had added an @ format to the field in question when I pasted from Excel to create the table, and that format carried down to the report level. That format, along with several other conditions Allen describes, causes Access to truncate the memo output to 255 characters to save CPU effort.

*sigh*

I guess it made sense 20 years ago, when CPUs were considerably weaker. Now? I'm not so sure it's still needed.