Month: October 2011

When trying to get search working correctly, the full crawl on our SharePoint 2010 server kept coming back with “Access is denied. Check that the Default Content Access Account has access to this content, or add a crawl rule to crawl this content”. This error occurs because Microsoft implemented a loopback check to prevent reflection attacks, and that check blocks authentication when the crawler accesses the server via the server’s own host name. To disable the loopback check, do the following:

Open Regedit

Navigate to HKLM\SYSTEM\CurrentControlSet\Control\Lsa

Create a new DWORD (32-bit) value called DisableLoopbackCheck and set its value to 1
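If you’d rather script the change, a PowerShell equivalent of the steps above looks like this (run from an elevated prompt; a reboot may be required for it to take effect):

```powershell
# Create the DisableLoopbackCheck value (DWORD = 1) under the Lsa key
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa" `
    -Name "DisableLoopbackCheck" -PropertyType DWord -Value 1 -Force
```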

I was trying to use the Invoke-Sqlcmd cmdlet in a script block in PowerShell to kick off a job, but I kept running into the following error:

“The term ‘Invoke-Sqlcmd’ is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.”

When you use the Start-Job cmdlet to start a job, you have to re-initialize the session with the snap-ins or modules that the script block references. I had assumed it would more or less inherit the profile of the session the script was run from, which is apparently not the case. It took me a bit to figure out, but you have to pass whatever you want loaded to Start-Job in the -InitializationScript parameter. Here’s an example:
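A minimal sketch of what that looks like (the server and job names here are made up; on SQL Server 2008-era installs, Invoke-Sqlcmd lives in the SqlServerCmdletSnapin100 snap-in):

```powershell
# Load the SQL snap-ins inside the job's own session via -InitializationScript
$init = {
    Add-PSSnapin SqlServerCmdletSnapin100
    Add-PSSnapin SqlServerProviderSnapin100
}

Start-Job -InitializationScript $init -ScriptBlock {
    # Hypothetical server/job names for illustration
    Invoke-Sqlcmd -ServerInstance "MYSERVER" `
        -Query "EXEC msdb.dbo.sp_start_job @job_name = N'NightlyETL'"
} | Wait-Job | Receive-Job
```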

Here’s something I don’t do often enough to commit to memory: creating your profile. If you have anything you want to run when you fire up PowerShell (SQL Server & SQLPSX modules, etc.), you first need to create your profile. You can do so with the following command:
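One way to do it ($PROFILE resolves to your per-user profile path, and -Force creates any missing parent folders):

```powershell
# Create the profile file if it doesn't already exist
if (!(Test-Path $PROFILE)) {
    New-Item -ItemType File -Path $PROFILE -Force
}
notepad $PROFILE  # then add whatever you want loaded at startup
```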

We have this database which is about 300 gig. A substantial portion of this data is logging data, and a substantial portion of the logging data is simply heartbeat records. Why on earth the home page logs to the database is beyond me, but moving on…

There are roughly 280 million records in one table in particular that need to be moved to an archiving database on the same server. Usually, my approach is to BCP out the data that I want to keep and BULK INSERT it back into a new table, but I decided to try a new approach based upon Henk van der Valk’s article here.

This approach uses multiple parallel streams in SSIS to SELECT the data into the new table. The streams are partitioned by applying the modulo operator to an IDENTITY column in the table, so each stream handles only the rows with a particular remainder. Here’s an example:
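Something along these lines, assuming a hypothetical table dbo.HeartbeatLog with an IDENTITY column named ID. With four streams, each source query takes one remainder, so together they cover every row exactly once:

```sql
-- Stream 1 of 4
SELECT * FROM dbo.HeartbeatLog WHERE ID % 4 = 0;
-- Stream 2 of 4
SELECT * FROM dbo.HeartbeatLog WHERE ID % 4 = 1;
-- Stream 3 of 4
SELECT * FROM dbo.HeartbeatLog WHERE ID % 4 = 2;
-- Stream 4 of 4
SELECT * FROM dbo.HeartbeatLog WHERE ID % 4 = 3;
```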

As you can see in the image below, this partitions the table into sets based upon the remainder of the ID field divided by the number of streams.

So, in SSIS I created a Data Flow Task. In that Data Flow Task I created 4 OLE DB Sources, which hold the SQL SELECT statements that use the modulo operation to divide up the sets of data. I then dragged the output of all 4 to a Union All transformation, and the output of the Union All goes to the SQL Server Destination. Of note, you can only use the SQL Server Destination if the destination table is on the local server; if not, you must use an OLE DB Destination.

The result of this was moving 227 million rows in about an hour. Not bad! When doing this via BCP/BULK INSERT, the BCP out alone took over an hour, and the import took another 45 minutes on top of that. This essentially cut down the loading time by about half!

Another thing I did to speed up the transfer was to increase the network packet size on the connections from 4,096 bytes (the SQL Server default) to 32,767 (also mentioned in Henk van der Valk’s excellent post).
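In the OLE DB connection manager this is the Packet Size property; expressed as a connection string it looks something like the following (provider and server name are examples):

```
Provider=SQLNCLI10;Data Source=MYSERVER;Integrated Security=SSPI;Packet Size=32767;
```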

Here’s a tip I’ve been using for many years now. To check connectivity from a remote server to your database server, simply right-click on the desktop and select New, then select Text Document. Name the file test.udl and hit Enter. Now simply double-click on the newly created .udl file and a dialog asking for connection information will appear. Fill in the details and hit Test Connection. Quick and dirty way to check connectivity problems.
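Under the covers, a .udl file is just a text file holding an OLE DB connection string, so once the test succeeds you can open it in Notepad and copy the string out. The contents look something like this (provider and server name are examples):

```
[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=MYSERVER
```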

With our trust being severed, we now have a bunch of Windows logins that need to be removed from the untrusted domain. This script will iterate through all the databases and drop all the users specified in the array, then drop the logins from the server. Use at your own risk!
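A rough sketch of that shape using SMO (server and login names below are placeholders, not the originals):

```powershell
# Sketch with SMO; adjust $server and $logins for your environment. Use at your own risk!
[void][Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
$server = New-Object Microsoft.SqlServer.Management.Smo.Server "MYSERVER"
$logins = @("OLDDOMAIN\user1", "OLDDOMAIN\user2")

foreach ($login in $logins) {
    # Drop the user from every database it exists in...
    foreach ($db in $server.Databases) {
        if ($db.Users.Contains($login)) {
            $db.Users[$login].Drop()
        }
    }
    # ...then drop the login from the server
    if ($server.Logins.Contains($login)) {
        $server.Logins[$login].Drop()
    }
}
```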

I have trust issues. No really, our trust between our domains was severed this morning.

Here is a script that will add logins to the servers, create users in all the user databases on the servers (in the db_datareader & db_datawriter roles, but configurable), and then grant VIEW DEFINITION on all the procs, views & UDFs on the server.
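One way to sketch that with SMO (names are placeholders; note this version uses a database-level GRANT VIEW DEFINITION, which covers all procs, views & UDFs in one statement rather than granting per object):

```powershell
# Sketch with SMO; server/login names are placeholders
[void][Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
$server = New-Object Microsoft.SqlServer.Management.Smo.Server "MYSERVER"
$loginName = "DOMAIN\AppUser"

# Create the Windows login if it doesn't already exist
if (!$server.Logins.Contains($loginName)) {
    $login = New-Object Microsoft.SqlServer.Management.Smo.Login($server, $loginName)
    $login.LoginType = "WindowsUser"
    $login.Create()
}

foreach ($db in $server.Databases | Where-Object { !$_.IsSystemObject }) {
    if (!$db.Users.Contains($loginName)) {
        $user = New-Object Microsoft.SqlServer.Management.Smo.User($db, $loginName)
        $user.Login = $loginName
        $user.Create()
    }
    # Role membership is configurable; datareader/datawriter shown here
    $db.Roles["db_datareader"].AddMember($loginName)
    $db.Roles["db_datawriter"].AddMember($loginName)
    # Database-level grant covers every proc, view & UDF in the database
    $db.ExecuteNonQuery("GRANT VIEW DEFINITION TO [$loginName]")
}
```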

This script will return a list of servers where the rowcount of msdb.dbo.sysjobhistory is >= 500. Much like the previous post, it’s used for finding servers that have no history cleanup jobs. You will need SQLPSX installed to run this.
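A sketch of the idea, assuming SQLPSX’s Get-SqlData function (the server-list file path is a placeholder):

```powershell
# Sketch using SQLPSX; check your Get-SqlData parameter names against your SQLPSX version
$servers = Get-Content "C:\scripts\servers.txt"

foreach ($s in $servers) {
    $row = Get-SqlData -sqlserver $s -dbname msdb `
        -qry "SELECT COUNT(*) AS JobHistoryRows FROM dbo.sysjobhistory"
    if ($row.JobHistoryRows -ge 500) {
        "$s : $($row.JobHistoryRows) rows in sysjobhistory"
    }
}
```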

Quick PowerShell script to show servers that don’t have a history cleanup job. This script looks for jobs with both ‘clean’ and ‘hist’ in the name, but not ‘distribution’ (to avoid false positives from the ‘Distribution clean up: distribution’ job, if your server is a distributor). I’m sure there is a better way of checking for this (like checking the row count of the msdb.dbo.sysjobhistory table), but this should work just as well (as long as your job is named aptly).

You’ll need SQLPSX installed to run this. Usual warnings, run at your own risk.
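The original used SQLPSX; here is a rough sketch of the same idea using plain SMO instead (the server-list file path is a placeholder; PowerShell’s -match is case-insensitive by default, so ‘Clean’ and ‘clean’ both hit):

```powershell
# Sketch: flag servers with no aptly-named history cleanup job
[void][Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
$servers = Get-Content "C:\scripts\servers.txt"

foreach ($s in $servers) {
    $srv = New-Object Microsoft.SqlServer.Management.Smo.Server $s
    $match = $srv.JobServer.Jobs | Where-Object {
        $_.Name -match 'clean' -and $_.Name -match 'hist' -and
        $_.Name -notmatch 'distribution'
    }
    if (!$match) { "$s has no history cleanup job" }
}
```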