Category Archives: SQL Server

As expected, Brent “Master” Ozar (Blog | Twitter) already scooped the story, but it never hurts to help seed Google searches for “SQL Server 2008 R2 release date”. Well now it’s official: May 21, 2010.

Recently I had the pleasure of going up to Microsoft for an R2 event with Donald Farmer (Blog | Twitter), which I’ll be blogging about soon. At this event they did an overview of the new R2 features and got our thoughts about them. Personally, I think PowerPivot is an awesome tool that will help put the power of business intelligence in the end user’s hands in an easy way. Another feature that I get more excited about the more I learn about it is the new StreamInsight feature. I’d still like to test it some and read more about it, but I can see some very cool things coming out of that feature.

Update via Brent’s site (reposted with permission):

Update: maybe not May 21. I’m hearing secondhand (nothing NDA) that either the audience heard the dates wrong, or Microsoft announced it wrong. I’m showing a couple of tweets below from audience members to support that it was probably the latter.

Tonight was our monthly SQL Server User Group meeting and our featured presenter this evening was Plamen Ratchev (Blog) presenting on performance tuning and query optimization. First off, he has an awesome accent. I think he’s of Croatian descent from what he mentioned but he rolls his R’s something fierce. I should’ve had him say the phrase “reporting services” a bunch of times just to make me giggle. Anyways, I’m way off topic…

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. – Donald Knuth

This was a good point to open with: if you focus only on performance in the development phase, you’re more than likely going to perform an epic fail. He then went on to show the differences between being reactive and proactive in terms of performance tuning. His take was that in Europe their development processes focus more on being proactive and trying to take care of issues before they become major problems, as opposed to running around putting out fires all the time like many a DBA (myself included) is forced to do on a daily basis. In an interesting story, he told us about how he had come to the U.S. and pitched a European software package that allowed a major car manufacturer to improve their processes. The software would basically alert you if anything in production fell “out of the norm” and would advise actions on how to fix it. The American company had a different approach to their process: they basically hired a team of high-priced consultants to come in for a week, measure everything nuts to bolts about what’s wrong in the production process, produce a report and leave (whether or not the problem got fixed). What surprised me most was that he told us the manager said if there’s a problem they pretty much just build another assembly line somewhere else rather than fix the current issue. If you’re wondering why they need Federal bail-out money, this story should give you a slight clue.

The presentation continued with common performance issues you’re likely to find, such as inefficient queries, retrieving too much data, and inefficient or missing indexes. This was a nice list to see for both devs and DBA’s alike so everyone is aware of these common mistakes. This led to the topic of problematic designs such as the “one-true lookup table” issue. This is when, rather than normalizing your data, someone decides it’s easier to just throw everything into one large table and add columns as needed later on. I can see some of you cringe when you read that since you’ve probably seen it in production somewhere at some point. The surprising thing that came out of this example (to me anyways) was that sometimes this setup actually makes sense for very specific applications, such as a simple survey or a medical application that is only storing straight facts (i.e. a patient monitor). Another oldie but a goodie is mismatched data types on join columns or filters. While this may work without a problem, when you throw a heavy workload at something like this you’ll see performance tank because behind the scenes the database engine is having to do lots of implicit conversions to process that query for you. So remember that little tidbit next time you’re planning with your devs and database developers.
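To make the mismatched-data-type point concrete, here’s a hedged sketch (the table and column names are made up): the join below forces an implicit conversion on every row because the two OrderNumber columns were declared with different types.

```sql
-- Hypothetical schema: OrderHeader.OrderNumber is VARCHAR(20) and indexed,
-- but OrderDetail.OrderNumber was declared NVARCHAR(20).
SELECT h.OrderNumber, d.LineTotal
FROM dbo.OrderHeader AS h
JOIN dbo.OrderDetail AS d
    ON h.OrderNumber = d.OrderNumber;
-- Behind the scenes SQL Server wraps h.OrderNumber in CONVERT_IMPLICIT
-- to NVARCHAR, which can turn an index seek into a scan under load.
-- Matching the declared types on both sides avoids the conversion entirely.
```

You can spot these yourself by looking for CONVERT_IMPLICIT in the query’s execution plan.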

The next interesting thing I learned was regarding data type optimization. Do you know what one of the fundamental differences is between VARCHAR and NVARCHAR, besides one taking up twice as much space as the other? NVARCHAR stores Unicode, so it can handle characters from any language, while VARCHAR is limited to a single code page. So if your application is only ever going to be delivered in a single, default collation, stick with VARCHAR.
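A quick way to see the space difference for yourself (a small sketch; DATALENGTH returns the number of bytes a value occupies):

```sql
DECLARE @ansi VARCHAR(50), @uni NVARCHAR(50);
SET @ansi = 'reporting services';    -- one byte per character
SET @uni  = N'reporting services';   -- two bytes per character

SELECT DATALENGTH(@ansi) AS varchar_bytes,   -- 18
       DATALENGTH(@uni)  AS nvarchar_bytes;  -- 36
```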

I could go on and on but needless to say this was an extremely insightful and useful presentation. Another user group member, Ron Dameron (Twitter), noted on Twitter this evening

…seen this deck twice now. Learned new stuff both times. Thx Plamen

If you ever get a chance to attend one of Plamen’s presentations at a live event I highly encourage you to do so as he’s a brilliant guy and presents well. If he’s not coming anywhere near you, you can still check out his presentation stylings by watching his videos over at JumpstartTV. In closing, here are some book recommendations he threw out at the end of his slide deck. Basically this is just an excuse for me to use the cool Carousel feature from Amazon!

A few weeks back I presented at the Tampa SSUG meeting on Policy Based Management with SQL 2008. Unfortunately due to sickness and apparently my own laziness, I never got around to doing a follow-up post or posting my slide deck. First the slide deck:

As a warning the deck itself is a bit sparse with how-to info as the meat of my presentation is in demo format. Given that fact, I will start a series of blog posts covering Policy Based Management and how/what you can do with it. Keep an eye out in the coming weeks for those! In the meantime a big thank you for everyone that attended the Tampa presentation, hopefully I’ll be giving this same presentation again at SQL Saturday South Florida in August.

This morning on Twitter my friend (and as per request, “handsome moron”, but trust me he’s no moron) Jeremiah Peschka (Twitter) asked about scheduled traces. I suggested he do a server-side trace. Unfortunately, when you Google how to do this, the results just tell you how to do it via T-SQL, and to my dismay I didn’t see an article on how to create/set this up from SQL Server Profiler. So this will be a quick rundown on how to create your trace using the GUI of SQL Server Profiler and schedule it to run at a time of your choosing (so you don’t have to wake up at 4 am to kick off a trace).

1. Cut a hole in the box…oh wait, wrong instruction set. Open up SQL Server Profiler and create a new trace. You can launch Profiler either from within SSMS (SQL Server Management Studio) by going to the Tools menu and selecting SQL Server Profiler, or by clicking the Start button and going to Programs, Microsoft SQL Server 2008 (or 2005), Performance Tools, where you should see the shortcut for Profiler.

2. Check your save-to options. Since this will be running on the server itself you’ll have to make a decision here. If you save to a table, try to save to a database that is NOT on the monitored server, preferably a separate database/table you create for this purpose; you can either keep that database for reporting purposes or drop it when you’re done with your analysis. If you don’t want to create a whole new database you can create a new table in an existing database. The advantage of saving to a table is that you can use native SQL tools, or even Excel, to do your analysis. Ideally you’ll want to save trace data to a file as it’s faster, but for this demo the save-to-database method is the one we will use.

a. When you select this option you will be prompted to connect to a SQL instance of your choosing. Select the server you’ll be doing your trace on.

b. Here you can name your new table (creates table for you, assuming you have permission to do so).

c. Your other option is to save to file. This will create a series of .TRC files wherever you specify. The advantage to using this option is that your results become portable in that you can move and import those files wherever you need them to do your analysis (i.e. copy them to your local PC and do analysis locally).

3. Configure your trace’s stop time. This is important as you want to make sure you have a consistent slice of time measured. This can later be modified within the script if you’d like.

4. At this point we can customize our trace. Click on the Events Selection tab. Here you can select the events and filters you’d like to trace on. Be careful not to choose too many options/filters, as this can bog down the trace and cause performance problems on the server itself since you’re trying to log so many things at once. For purposes of this demo I’ve chosen the Standard (default) template and default settings.

5. Run your trace…and then stop it immediately. This sounds a little strange but keep in mind we’re not running our trace right now, we just want the script that makes up this trace. You start the trace by clicking the Run button on the previous screen. Once the trace starts, you stop it by clicking on the Stop button or by going to the File menu and selecting Stop Trace.

6. Export the trace definition. Go to the File menu, select Export, then Script Trace Definition, and save the script for your version of SQL Server to a file.

7. To see your script, open it up in SSMS. In the script you’ll see all the options you chose in the Profiler GUI. The @DateTime variable is the option for when to stop your trace, so modify that as needed. Of note, check the number after the @TraceID output (in the screenshot it’s 0). If you leave it at 0 the trace file will fill up and not roll over. To avoid this, replace that with 2. You can read about all the options in Books Online.

UPDATE: One of the unfortunate parts of this is that the scripted trace only supports writing the trace to a flat file, so you can’t specify a SQL table like in the GUI version. In the script, make sure you specify the location for your trace file as well.

8. Now to schedule your newly created script. In SSMS connect to the server you want to trace on. Go to your SQL Server Agent and expand it so you can see the Jobs folder. Right-click the Jobs folder and select New Job.

9. In the New Job Step dialog, click on the Open button. Browse to where you saved your trace script file, select it and click Open. It might prompt you twice; just repeat. Once you’ve loaded it you should see the contents of your script file in the Command window. NOTE: At this point you can also choose what account to run this script as by selecting an account from the dropdown menu under Run as. Click OK when you’re done.

10. Next we need to schedule your trace. Click on the Schedules link in the side menu. You can pick an existing schedule or create a new one; here you’ll want to create a new one. Give your schedule an easily identifiable name. Since we’re creating a custom schedule you can do a one-time job or choose to run this on a recurring schedule (i.e. once a month). The recurring schedule is advantageous if you’re looking to monitor your SQL performance on a regular basis and want to make sure you trace the same actions every time. One thing to note is that you want to set your start time here under the ‘Occurs once at’ section. Remember, you’ve already scheduled your stop time for the trace within your script. Once you’re done customizing your schedule, click OK to proceed.

11. Click OK to finish creating your new job. To see your newly created job look under the Jobs folder in SQL Server Agent. Congratulations, you’ve now created an automated server-side trace using SQL Server Profiler!
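For reference, the exported script from step 7 boils down to something like this trimmed sketch (the file path and stop time are placeholders, and a real export contains one sp_trace_setevent call per event/column combination, often dozens):

```sql
DECLARE @rc INT, @TraceID INT, @on BIT;
DECLARE @maxfilesize BIGINT, @DateTime DATETIME;
SET @maxfilesize = 5;                       -- MB per .trc file
SET @DateTime = '2010-06-01 06:00:00';      -- stop time; modify as needed
SET @on = 1;

-- The second argument (2) enables file rollover; leaving it at 0 means
-- the trace simply stops when the file fills up.
EXEC @rc = sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\MorningTrace',
                           @maxfilesize, @DateTime;

-- One call per event/column pair; 10 = RPC:Completed, 1 = TextData.
EXEC sp_trace_setevent @TraceID, 10, 1, @on;

EXEC sp_trace_setstatus @TraceID, 1;        -- 1 = start the trace
SELECT TraceID = @TraceID;                  -- note the ID so you can stop it later
```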

That’s right, I went the Fantasy Island route since everyone else took the good jokes. Well Tim Ford kicked off a blog meme with this setup:

So You’re On A Deserted Island With WiFi and you’re still on the clock at work. Okay, so not a very good situational exercise here, but let’s roll with it; we’ll call it a virtual deserted island. Perhaps what I should simply ask is if you had a month without any walk-up work, no projects due, no performance issues that require you to devote time from anything other than a wishlist of items you’ve been wanting to get accomplished at work but keep getting pulled away from I ask this question: what would be the top items that would get your attention?

I was tagged by Jason Massie (Twitter) for this one and I’ve got to admit I love this scenario; it’s especially fitting considering all the crap I’ve been through the last couple of weeks with catastrophic power failures, shoddy backup software and database corruption fun. So let’s get to it.

Get Control of Backups

Man, if I’ve learned anything in the last week, it’s that I need to get the backup situation under control. My first task (and hopefully this will become a reality sooner rather than later) is to get Arcserve the hell out of our environment. I haven’t seen a piece of software this unstable since the release of Windows ME. Half the time you try to do something the GUI freezes up. Ugh, but I digress. Basically I need something reliable to be able to handle my backups, and most importantly, RESTORES!

Learn that Business Intelligence Stuff

I love the fact that SQL Server has so much to offer and the BI part of it is not only interesting but extensive! Whenever I get a chance I watch webcasts and presentations on Analysis Services, SSIS and Reporting Services, but sometimes it just starts going over my head (especially AS). I think if I could take those 6 months and head-down study that stuff I’d be alright and definitely add more value to the organization.

PowerShell

PowerShell, the magical language that Microsoft is pushing as the standard scripting language for administrators. I’ve watched Buck Woody do a cool webcast on PowerShell for Database Administrators, but I haven’t had time to sit down and learn PowerShell the way I’d like to. Then again I don’t have thousands of servers to deal with, so it’s really not a priority for me at the moment. But if I have those 6 months with Wilson staring at me from an adjacent rock, I might as well learn something cool and scalable.

Security Auditing and Cleanup

I jumped into my current environment with servers that were just kind of stood up with no real security policy and no real thought/regard to performance. Now that I’ve been here a year I have a “lay of the land” and if I had time I’d like to go back through everything and clean up stuff like built-in administrators in sysadmin group and whatnot. This task will definitely be helped using Policy Based Management but what I need is the time to plan, coordinate with our IT security team, and execute/enforce. Pipe dream? Maybe.

I’m sure there are probably a hundred other things I’d like to do, including certification, but I’ll leave the list as-is. Time to pay it forward and tag some of my SQL brethren:

So in my first post I explained that my moniker was inspired by the rubber chicken hanging on the wall in my cube. After realizing that he’s now become an integral part of the office, in that we all squeeze him whenever something breaks, it’s time he had a name. That’s where you guys come in!

My chicken needs a name. Anything goes: SQL-related is cool, but it doesn’t really matter as long as it’s befitting his awesomeness. After all entries are in, we here in the office will go over them and decide a winner. Unfortunately I don’t have the backing of Godaddy.com on this one like the SQL Rap Contest, so sadly I don’t have any prizes, just bragging rights. I’ll set the deadline to two weeks from today (Friday June 12th). Feel free to submit entries here on this blog or DM them to me on Twitter.

Today was Quest Software’s bi-weekly Pain-of-the-Week webcast and this week’s topic was Getting Started with SQL Server Management Studio. We were lucky enough to have not one, but two SQL rock stars presenting today in Brent Ozar (Blog | Twitter) and Michelle Ufford (Blog | Twitter).

Brent kicked it off with a quick intro and then handed it over to Michelle to walk us through the basics such as creating a database, adding tables/columns/objects/etc., as well as going through all the options available such as creating Maintenance Plans, creating backups from within SSMS, and the SQL Server Agent. They even demoed some of the nice little tools available in the latest SSMS such as Activity Monitor and the built-in reports.

A question was raised about whether the new SSMS offered a view equivalent to the taskpad view in SQL 2000 Enterprise Manager, and the answer is…kind of. If you’re using SSMS 2008 (which you should be, since you can install it without having to have a SQL 2008 instance), that view has been replaced by the built-in reports. You can access the reports by right-clicking your instance or even a database itself, going to Reports, then Standard Reports, and selecting which report you’d like to view. And like we learned yesterday, some of these reports come from our default trace!

Another fun tidbit of information learned from this presentation was that if you choose to (granted this isn’t recommended) right-click a table and select Edit Top 200 Rows, it opens the records in an Access-like grid and allows you to change data directly. But if this is cool and useful, why isn’t it recommended? Well, when you open records like this it puts a lock on those records so nobody else can get to them. In a production environment this is clearly a no-no, but at least you know the feature’s there. For the record, the recommended method would be to do something like this:
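The exact statement depends on your schema, but a minimal sketch with made-up table and column names would look like the following: a targeted UPDATE touches (and locks) only the rows named in the WHERE clause, and only for the duration of the statement.

```sql
-- Hypothetical example: correct one customer's email address instead of
-- opening the whole table in the Edit Top 200 Rows grid.
UPDATE dbo.Customers
SET    Email = 'new.address@example.com'
WHERE  CustomerID = 42;   -- narrow WHERE clause = minimal, short-lived locking
```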

After Michelle wrapped up our tour of SSMS, Brent gave us a quick tour of Quest Software’s Toad for SQL Server product. This is a pretty slick product aimed at the development community. It gives you basically the same things as SSMS for the most part, but it also gives you so much more. For instance, you can run a query and re-sort your results using column sorting as you would in Excel (i.e. click a column name to change the order). You can do filtering this way as well. What’s the big deal about this? Every time you change your ordering you don’t have to make a roundtrip to the server to re-query your data; it holds everything locally so it’s speedy. This is where Borat pops up and says “very niiiiiiiiice”. There was also some slick thing it did with pivots, but that was the point where my desktop froze so all I got was audio from the phone call, though the people commenting on Twitter seemed to like…whatever it was it did. But don’t take my word for it: try it out yourself and download a 30-day demo of the product at Quest’s website!

That pretty much summed up the webcast this week! They’ll be doing a follow-up webcast on June 11th with more advanced tips as well. You can register for that webcast here. If this post didn’t quite do it for you, you can catch all POTW webcasts on-demand over at Quest’s website. Today’s presentation should be up in a few days.

Today during lunch was the monthly PASS webcast for the Database Administration SIG. This month’s topic was understanding the default trace, presented by Jonathan Kehayias (Blog | Twitter). Jonathan did a great job as he very clearly explained the nuances of the default trace (i.e. what it really is, how you query it, what it does, etc.).

The presentation files are available at his blog. If you get a chance definitely check out the on-demand replay which should be available next week. On a side note this is my second or third one of these I’ve attended and I absolutely love it! I think PASS is doing a great job providing this sort of on-going training for the community and I also tip my hat to people like Jonathan who are willing to take time out of their day to present.

We kicked off the night with a discussion I led about social networking. As most discussions tend to go, the topic dominator was Twitter. A few of us shared how Twitter has managed to help us in various facets of our job, be it job opportunities, problem resolution or just general networking. A few of us in the audience are already on Twitter, such as Jonathan Kehayias, Jack Corbett, Pam Shaw, and Steve Turner. Besides Twitter we also covered sites such as LinkedIn. LinkedIn was specifically mentioned as helping people find jobs. One member in attendance shared with us that he actually landed his current position because of LinkedIn, which shows that social networking does work! One new thing I learned out of this conversation was something called MSDN Social, which I’ll have to check out. Thanks to Travis Page for pointing this one out to us.

Next up was Jack’s talk on SQL Server Profiler basics. I thought I knew enough about Profiler, but boy was I wrong! What’s funny is that Jack asked the room how many of us currently use Profiler as part of our DBA arsenal. Only about a quarter of those in attendance raised their hands. He followed up by asking how many of us knew that SQL Server 2005/2008 has a trace running by default on install, and even fewer hands went up. This is pretty surprising considering it’s a feature specifically built in to make it easier for us DBA’s to do performance analysis. Considering Extended Events is the next “big thing” in this arena coming out of the Redmond camp for SQL Server, I think either we need to blog more about these sorts of things or Microsoft had better come up with a Mojave Trace profiler and say “surprise, it’s really SQL 2005!”. Anyhow, back on track…another thing I learned was that Profiler filters take wildcards. Yes, that’s right, you can trace on ‘ADv%’ or ‘%acon’. I thought this was pretty cool.
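Since the default trace came up, here’s a small sketch of how you can peek at it yourself using the sys.traces catalog view and the fn_trace_gettable function:

```sql
-- Find the default trace's current file and read it back as a table.
DECLARE @path NVARCHAR(260);

SELECT @path = [path]
FROM sys.traces
WHERE is_default = 1;

-- DEFAULT as the second argument reads all rollover files in the set.
SELECT TOP (50) StartTime, EventClass, DatabaseName, TextData
FROM fn_trace_gettable(@path, DEFAULT)
ORDER BY StartTime DESC;
```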

Another thing that’s new in the 2005 Profiler (thankfully, since this particular issue drives me nuts) is the ability to pause a trace mid-stream, modify your trace values, and continue the trace without losing any of the data you’ve already collected. Speaking of pausing, a button I had just never noticed all this time was right up top (Auto Scroll); it stops the profiler from skipping to the latest data. I don’t know how many times I’ve been staring at the data scrolling by, seen what I think is a problem query, clicked on it, and by the time I do the screen rolls over two or three times so I have to go back and hunt for that line. In the words of Charles Barkley, “just turrr-ible”.

During the presentation portion where we were dissecting deadlock issues, someone mentioned that one time they had a deadlock issue but it wasn’t showing up properly when they traced it using the 2005 Profiler. Jonathan Kehayias enlightened us that deadlocks with multiple victims won’t show in the 2005 Profiler and that this situation is common with parallelism issues (which this guy had). The 2008 Profiler, on the other hand, does handle that case and displays it properly. The next tip I picked up and can’t wait to use was the fact that with the 2005 Profiler you can correlate data between Profiler and Performance Monitor. In order to do this you need to have started and stopped both with scheduled times. Another note: the Counter Log from perfmon needs to be in Binary File type in order for this to work. Once you have your traces done, go to the File menu and save your Profiler trace to a file. Then open that trace file. Once it’s open you should be able to go to the File menu and select Import Performance Data. Navigate to where you saved your perfmon trace and open it. Once you have selected it you will see a graph overlaid underneath your trace. Now if you click anywhere on that graph you will get a line that shows exactly which point in your Profiler trace that performance spike happened, so you can see exactly which SQL is causing problems (if any)! Click on the image above for an example of this.

Overall I thought this was a great meeting. We had a packed house and good discussion and tips. Personally I feel I even got a little more out of it than usual thanks to social networking. Prior to Jack’s arrival for our meeting, he and I had been interacting on Twitter, so it made it a little easier to meet him for the first time, which was pretty cool. Next month I’m slated to do a presentation on Policy Based Management so I’m pretty excited for that. Another exciting announcement is that apparently we’re getting Kevin Kline to come speak at our group, which should be really awesome.