I recently had some correspondence with one of the other OmniNerd developers concerning the trade-off between always following the conventions of our chosen framework, Ruby on Rails, and writing custom low-level code to avoid Rails features that are inefficient in specific scenarios.

The baseline argument for convention is simple: if you stick with convention, you help ensure future support as the framework matures, and you allow anyone who understands the framework to easily understand your application. The baseline argument for customization is equally simple: you can save yourself a lot of money on hardware by writing code that most efficiently tackles your biggest bottlenecks.

Academic arguments are great and all, but I’m wondering more about the practicality of how one approaches such a problem with limited resources. Take OmniNerd for example. It’s not the full-time job of any of its developers. Right now we’re 100% following Rails conventions, but as our traffic grows and we implement more complex code and queries, some conventional approaches are putting a decent load on our hardware. So we have two basic options: pay for bigger hardware, or spend more time writing and maintaining custom code that allows us to live on our existing hardware.

I’m curious if anyone out there has any experience with this sort of thing. Are there any natural “triggers” you see for when to spend the time and effort on customized optimization versus simply throwing money and more powerful hardware? Any lessons learned from actual dealings with this sort of thing?

I’ve always preferred convention and readability over hacks and tricks. For example, back in the day, doing a lot of graphics tricks on a 386 required assembly-language routines, loop unrolling, etc. Basically, even when an elegant algorithm existed, it was necessary to take lots of little deviations to get reasonable performance. The price, of course, was readability, long-term maintenance, and not being “clean.”

This is a classic case of that. The site has adhered to convention, a fact I can attest to as MarkMcB has harassed me endlessly when I deviated and forced me to fix it. But what is going on behind the scenes that causes some queries to bog down on a computer more than capable of doing the task if queried directly in SQL? I realize Ruby is an interpreted language, which always incurs a hit, and Rails is Ruby-heavy on the back end.

But still, when operating systems and virtual machines and heavy applications all functioned just a few years ago on hardware only a third as powerful, is it too much to ask for a short operation not to bog the system? It is clearly easy to throw money at it, as hardware is relatively cheap these days. But is the right answer money, plugging in special code, or optimizing what’s there at the expense of convention?
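One common way conventional Rails code bogs down a capable machine is the N+1 query pattern: lazy associations issue one query for a parent list, then one more per row. Below is a toy pure-Ruby model of the pattern (no real database; a `QUERY_LOG` array stands in for round trips, and the article/comment data is made up for illustration), contrasting lazy loading with the kind of eager fetch Rails 2’s `:include` option performs.

```ruby
# QUERY_LOG stands in for the database; every "query" is recorded
# so round trips can be counted. Data below is purely illustrative.
QUERY_LOG = []

ARTICLES = [{ :id => 1 }, { :id => 2 }, { :id => 3 }]

# Conventional lazy loading: one query for the articles, then one
# per article for its comments -- N+1 round trips in total.
def lazy_load
  QUERY_LOG << "SELECT * FROM articles"
  ARTICLES.each do |article|
    QUERY_LOG << "SELECT * FROM comments WHERE article_id = #{article[:id]}"
  end
end

# Eager loading (what :include arranges in Rails 2): a fixed number
# of queries no matter how many parent rows come back.
def eager_load
  QUERY_LOG << "SELECT * FROM articles"
  QUERY_LOG << "SELECT * FROM comments WHERE article_id IN (1,2,3)"
end

QUERY_LOG.clear
lazy_load
puts QUERY_LOG.size   # => 4 (1 + N)

QUERY_LOG.clear
eager_load
puts QUERY_LOG.size   # => 2
```

The point is that the slowdown often isn’t the interpreter at all but query count growing with row count, which is invisible when you stay at the convention level and obvious the moment you watch the SQL log.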

if you stick with convention you help ensure future support as the framework matures

This is true if the framework has already achieved a kind of “middle age”. The problem is that Rails ain’t quite there, yet. When you look at some of the changes that came about with 2.0, I think that’s obvious.

There are some areas where Rails, and in particular ActiveRecord, just isn’t quite mature enough. The kinds of things it does, it generally does pretty well, but there are common kinds of operations where it basically sucks. Perhaps the most common is dealing with tree structures like threaded conversations.
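To make the tree-structure pain concrete: naive `acts_as_tree`-style traversal issues one query per node. A common workaround is to fetch the whole conversation flat in a single query and assemble the thread in memory. Here is a minimal pure-Ruby sketch of that assembly step, with comment rows represented as plain hashes (the sample data is made up):

```ruby
# Flat rows as a single "SELECT * FROM comments" might return them;
# :parent_id of nil marks a top-level comment.
ROWS = [
  { :id => 1, :parent_id => nil, :body => "root" },
  { :id => 2, :parent_id => 1,   :body => "reply" },
  { :id => 3, :parent_id => 1,   :body => "another reply" },
  { :id => 4, :parent_id => 2,   :body => "nested" }
]

# Group children by parent once, then walk the tree with no further
# lookups -- one query plus O(n) in-memory work, instead of a query
# per node.
def build_thread(rows)
  children = Hash.new { |h, k| h[k] = [] }
  rows.each { |row| children[row[:parent_id]] << row }
  walk = lambda do |parent_id|
    children[parent_id].map do |row|
      { :body => row[:body], :replies => walk.call(row[:id]) }
    end
  end
  walk.call(nil)
end

thread = build_thread(ROWS)
puts thread.first[:replies].size   # => 2
```

Nested-set plugins take this further by encoding the tree in the table itself, but even this simple grouping trick removes the per-node queries that make threaded views crawl.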

I think in a relatively immature framework, like Rails, there are advantages to be had in stepping outside the framework because that helps identify areas where work needs to be done.

What’s nice about Ruby is that, quite often, you can build your own extensions and inject them directly into the existing framework. Later, if your extensions become more widespread and are absorbed into the framework, you don’t have much work to do to integrate that.
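That injection works because Ruby classes are open: you can reopen any class, including the framework’s, and add methods. As a small sketch, the helper below adds a whitespace-collapsing method to `String` (the name `squish` mirrors a method Rails itself later added, so if the framework absorbs your extension, integration is just deleting your copy):

```ruby
# Reopen a core class and inject a helper, the same mechanism Rails
# plugins use to extend ActiveRecord and friends.
class String
  # Collapse runs of whitespace (including newlines) into single
  # spaces and trim the ends.
  def squish
    strip.gsub(/\s+/, ' ')
  end
end

puts "  convention   over\n  configuration ".squish
# => "convention over configuration"
```

The same technique applied to ActiveRecord classes lets you patch a specific inefficiency in place without forking the framework.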

You’ll get the productivity and ease of a Rails-like framework, all the optimizations of Hibernate and Spring that everyone loves to have but hates to configure and maintain, plus the widely known scalability of Java.

So keep your hands clean, stop messing around under the hood, and just use the car.

You need to look at any project in its entirety as a system. What is the end goal for all your work really intended to be? In a large corporation or large collaborative effort, it is always best to stick to rigid standards and methods because so many people need to understand the different parts of the system on an indefinite and ongoing basis. The tradeoff in system inefficiency is (usually) made up for in organizational efficiency across the whole of the group or corporation.
For a project which will always involve a small number of people and resources, what difference will it really make if you don’t rigidly stick to standards, as long as the deviations are well documented and there is good communication between developers?
On the other hand, if the end goal includes selling or turning the system over to another group or person someday, having adhered to standards whenever possible will be a huge selling point.
For any project — What is the end goal and what resources are you willing to commit to achieve that goal?

There are reasons for conventions, but sticking to them for convention’s sake alone, when those reasons don’t affect your overall goal, might be counterproductive toward that goal in the end.