I think you ought to tune your code and go back to refactor it when you can. However, deadlines and time pressures can force you to compromise, and at some point it does make sense to throw hardware at the problem. In web development, where you can deploy another server and scale out, it's harder to justify spending a lot of time tuning when, for $2-3k, you can throw a lot of hardware into one or two more servers. If a developer costs $70k a year, you can eat up $2k of that time quickly.

Talent is also an issue with many web developers. So many programmers simply don't know how to tightly tune code.

With SQL Server, you have a single server. To scale up, you replace the server, often with much more hardware, and that can cost $10k. So it makes more sense to spend time tuning. There are still limits: you can spend too much time tuning and not get new development done, but tuning is more worthwhile when a new server costs that much more. You will also run into scale limits. At some point you can't buy a bigger database server to handle the load from poor code.

In either case, developers (SQL or otherwise) need to learn better coding techniques: how to write code that performs better with fewer resources.

You took the words right out of my mouth. In my opinion, writing faster code is not only a question of skill but also of mindset. Writing tuned code is something people should internalize.

The best tuning is proactive, done in the architecture and system design! Tuning after a system has become slow is reactive and often ends up with hacks and a "hope the performance is enough, for now" solution.

I know this is an old post, but I had to say "Hear, hear!" to it. Great post, Tony. Phil is absolutely correct about how even the worst code on certain challenges simply blows away what a lot of folks buy hotter iron for... and the hotter iron frequently doesn't work at all.

Here's to all those folks that go the extra mile in code! Well done! {insert sound of someone clinking a beer and drinking it heartily}

--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code: Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T."--22 Aug 2013
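A minimal sketch of the row-versus-column contrast in the signature above (the table and column names here are hypothetical, purely for illustration):

```sql
-- RBAR: a cursor does the same work one agonizing row at a time
DECLARE @OrderID int, @Amount money;
DECLARE cur CURSOR FOR SELECT OrderID, Amount FROM dbo.Orders;
OPEN cur;
FETCH NEXT FROM cur INTO @OrderID, @Amount;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Tax = @Amount * 0.08 WHERE OrderID = @OrderID;
    FETCH NEXT FROM cur INTO @OrderID, @Amount;
END;
CLOSE cur;
DEALLOCATE cur;

-- Set-based: think about the column, not the row -- one statement, one pass
UPDATE dbo.Orders SET Tax = Amount * 0.08;
```

The set-based form lets the optimizer do the work in a single scan instead of issuing one UPDATE per fetched row.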

No offense, but I think most of you are wearing rose-colored glasses here.

Specifically, you have the following points backwards:

1) Failing to deliver a working product is guaranteed to fail your project, even if delivering it on time means shipping bottlenecks.
2) Foreseeing *ALL* the bottlenecks can really only be done by top people. Most of us will have to see it work to identify all the bottlenecks, even if we knew where some of them would be.
3) Foreseeing the scale of the application and the availability of investment is something most of us will not be able to do from the trenches. These decisions often get made by management and bean counters. You will have to work with them, not complain about them.
4) The implication of #3 is that while it might be cheaper in TCO to build a tuned app from the beginning, the value/funding may not exist until the app reaches the point where it hits the bottleneck. By then, iron may be 10x cheaper.
5) Throwing iron at a problem doesn't just make one thing faster; it usually makes most of the app faster. This means you're not comparing a single problem to the cost of iron. You're comparing overall app benefit to the cost of new iron.
6) Before you buy iron, you obviously should identify which part of the iron to invest in. If your server is not CPU bound, buying faster CPUs will not help much.

It's my opinion that the best solutions are not the "slick" ones that take a crappy architecture and make it blazing fast for one feature. The best solutions are the ones that let a junior person come in, learn one standard for what goes where and one standard for coding (perhaps with a couple of reference implementations), with monitoring already in place that yells at them when they get something wrong, before it makes it to production (examples: integration tests, continuous integration, continuous deployment).

In a properly delivered design, you would test at a scale appropriate to where you see the application going in the near future. I am *NOT* advocating premature optimization. I am advocating that you deliver something tested against the requirements you have (or should have).

Most of the time, you can get one of the top answers just by implementing obvious patterns over and over. Once that stops working, you should implement a new pattern, not just a one-off.

I hate to see crappy apps as much as the next guy, but if we're talking about investment vs rewards, we should really talk about this from the business point of view, because that's how we deliver the value other people care about.

Of course, buying new hardware can buy you some time. But how much time will you get if your app is full of crappy code? There's no piece of hardware that can handle massive nested RBAR. Some time ago, I managed to take a query that ran for over 30 hours without finishing and get it to run in under 5 minutes. I assure you, more power wouldn't have been a solution, as the time to procure it would have been longer than the time it took to tune the code.

Yes, you might be going one step at a time, but you know the fix will last. Throwing hardware at the problem is a fast, temporary solution that will demand attention again in the near future. What will happen when management finds out that the brand-new server that solved the problem ran into the same problem six months later? Is that cheaper?

It's not about pre-optimizing; it's about doing things right from the beginning and correcting anything that hasn't been done right. Changing configuration might also help improve performance across the whole system.

Luis C.
General Disclaimer:
Are you seriously taking advice and code from someone on the internet without testing it? Do you at least understand it? Or could it easily kill your server?

Luis Cazares (12/26/2014)
Of course, buying new hardware can buy you some time. But how much time will you get if your app is full of crappy code? There's no piece of hardware that can handle massive nested RBAR. Some time ago, I managed to take a query that ran for over 30 hours without finishing and get it to run in under 5 minutes. I assure you, more power wouldn't have been a solution, as the time to procure it would have been longer than the time it took to tune the code. Yes, you might be going one step at a time, but you know the fix will last. Throwing hardware at the problem is a fast, temporary solution that will demand attention again in the near future. What will happen when management finds out that the brand-new server that solved the problem ran into the same problem six months later? Is that cheaper? It's not about pre-optimizing; it's about doing things right from the beginning and correcting anything that hasn't been done right. Changing configuration might also help improve performance across the whole system.

+1000.

--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code: Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T."--22 Aug 2013

My experience is that application tuning does not have to be expensive, and it is usually far more cost-effective and faster than upgrading hardware.

I have a methodology that I use for tuning applications that use stored procedures. I run a trace for an hour or so of all activity during peak load, load the trace into a table, and then run queries to identify the stored procedures that are the biggest aggregate consumers of CPU, reads, or writes. My usual experience is that tuning the top 10 stored procedures increases application performance dramatically.
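A sketch of that analysis, assuming a server-side trace file (the file path and destination table name are placeholders): `sys.fn_trace_gettable` loads the trace, and a grouping query surfaces the heaviest procedures.

```sql
-- Load a server-side trace file into a table (path is hypothetical)
SELECT *
INTO   dbo.TraceResults
FROM   sys.fn_trace_gettable(N'C:\Traces\PeakLoad.trc', DEFAULT);

-- Top 10 stored procedures by aggregate resource usage during the window
SELECT TOP (10)
       ObjectName,
       COUNT(*)    AS Executions,
       SUM(CPU)    AS TotalCPU,
       SUM(Reads)  AS TotalReads,
       SUM(Writes) AS TotalWrites
FROM   dbo.TraceResults
WHERE  ObjectName IS NOT NULL
GROUP BY ObjectName
ORDER BY TotalCPU DESC;   -- swap the sort column to rank by reads or writes
```

Sorting the same aggregate by `TotalReads` or `TotalWrites` gives the other two "top 10" lists.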

I originally developed this technique one day when our CIO decided to initiate an emergency effort to split a poorly performing application across multiple SQL Servers. This would never have worked, and I realized we needed to do something quickly. Using traces, I identified four stored procedures that accounted for 97% of total CPU usage, and with about 3 hours of tuning effort, two DBAs working on each procedure, we reduced the average CPU usage from a steady 100% down to about 10%. Further tuning of additional procedures reduced it to about 7%. The problem was solved in a way additional hardware never could have matched, and the effort took less than 2 days.

Now I do this proactively for critical applications to identify stored procedures that could benefit from tuning. I run a trace, look at the top resource usage by stored procedure, and tune the ones that could benefit. It is important to continue tuning on a regular basis, since new or changed procedures may become performance problems.

Any query that takes 30 hours to finish isn't poorly written. It's just broken. Yes, you fix that stuff.

My normal metric is that if a query is written correctly, it will typically run in < 100 ms if it's simple, < 1 second if it's a report, or < 1 minute if it's along the lines of a year-end report run on transactional data.

The thing is, if you run those year-end reports every day, then 1 minute is, for me, broken. It should be reworked to run in < 100 ms.
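One way to check procedures against a threshold like that, sketched here against the real `sys.dm_exec_procedure_stats` DMV (the 100 ms cutoff mirrors the rule of thumb above; adjust to taste):

```sql
-- Flag cached procedures whose average elapsed time exceeds 100 ms
-- (total_elapsed_time in this DMV is reported in microseconds)
SELECT OBJECT_NAME(object_id, database_id) AS ProcName,
       execution_count,
       total_elapsed_time / execution_count / 1000 AS AvgElapsedMs
FROM   sys.dm_exec_procedure_stats
WHERE  total_elapsed_time / execution_count > 100 * 1000
ORDER BY AvgElapsedMs DESC;
```

Note this DMV only covers plans still in cache, so it complements rather than replaces a trace over a peak-load window.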

In no way should what I said be construed to mean that a system with hundreds of gigs of randomly generated and stored transactional data and crappy RBAR code can suddenly become usable if you throw more iron at it.