Pushing the boundaries, going forward

For as long as I can remember, since I was 3 or so, I have had this big, fat, constantly nagging question: “And what if I try this?”

The question itself is simple, but it is so alluring that it is impossible to resist.

And it does not come cheap: I have broken many toys (as a kid) and many other belongings (as an older kid), just to see how they work and how far they can go if they are tweaked.

(I have to really send a big THANK YOU to my parents, who never discouraged me from disassembling / breaking my toys; instead, they were supportive and always said “Hm, it is your toy, you can do as you wish, and maybe when you are done disassembling you might have enough parts to assemble two toys.”)

Anyway, being “grown-up” is not any easier than before – I still wake up before my alarm on most days with some variation of the good ol’ question:

And what if I try this… and what am I going to disassemble today… ah, that thing yesterday failed to assemble, but let’s try today… yes, eventually it got put back together and it seems to work better… can I tweak it to improve it…

And being a professional DBA, I am afraid, is not much different. I just like meddling with things to make them better, or different (and to understand why they were not better to begin with).

A few days back Tony Davis posted an editorial on Simple-talk about Adam Machanic’s presentation on the SQL Server Query optimizer (it is a very interesting editorial and very interesting discussion, you can read it here).

One of the topics in the discussion, however, particularly stuck with me:

Interesting ideas. I wonder how much these questions only come into play when we are pushing the boundaries of our systems to the edge. I would venture a guess that the vast majority of databases out there are still in the sub 100GB range and that the current optimizer can handle these just fine.

And Keith has a very good point. Most likely the majority of databases are not too demanding (and can be handled even by the mediocre performance of SQL Azure), but in my opinion people are in general driven by one of two forces: either “this is good enough” or “why not make it better”.

I almost always go for the second one (unless I am tired, sick or constipated).

If I go by “this is good enough”, I don’t feel like getting out of bed and I think that maybe I can do whatever later. But if I go by “why not make it better”, then I jump like I was bitten by a wasp and just do it.

Back to databases – it really does not matter what size a database is, or whether the hardware can handle the system well enough that the users don’t complain.

Whether I have a 5-billion-record, 2-terabyte database or a 2 GB database, I still try to get my queries and processes running in the range of milliseconds.

I just like pushing boundaries, always have and always will.

This is why I am starting a series called “Pushing the limits”, in which I will be writing about pushing the performance limits of SQL Server and its surrounding hardware.

There will be questions like: how can I exhaust the threads, how can I run out of memory, how can I make my network card use way too much CPU, how can I get 5-second latency delays from my IO system, can I bring an SSD drive to its knees before my CPU explodes, is SQL Azure really faster than my home computer, and so on…

To get started, I must mention a small tool designed by me and developed by a brilliant friend of mine: SQL Latency Meter. It does exactly what the name suggests – it measures latencies within SQL Server.
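To give a rough idea of what a latency measurement like this involves, here is a minimal Python sketch. This is not the actual SQL Latency Meter code; an in-memory SQLite database stands in for SQL Server, and the repeated-timing approach with `time.perf_counter()` is my assumption about the general technique, not the tool's implementation:

```python
import sqlite3
import statistics
import time

def measure_query_latency(conn, query, runs=100):
    """Run a query repeatedly and return per-run latencies in milliseconds."""
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        conn.execute(query).fetchall()  # force full result consumption
        latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies

# In-memory SQLite stands in for a real SQL Server connection (hypothetical setup).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t (val) VALUES (?)",
                 [(str(i),) for i in range(10_000)])

lat = measure_query_latency(conn, "SELECT COUNT(*) FROM t", runs=50)
print(f"min {min(lat):.3f} ms, "
      f"median {statistics.median(lat):.3f} ms, "
      f"max {max(lat):.3f} ms")
```

Reporting min/median/max rather than a single average matters here, because latency distributions are usually skewed and a lone outlier can hide what the system typically does.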
