Assuming that DHH did recently say something along these lines, I’ll agree 110%. Hardware gets exponentially cheaper (or more powerful, whichever way you want to look at it) all the time. Programmer time (spent optimizing your app or optimizing your architecture) does not. This isn’t to say there’s never a time for optimization (there certainly is), just that it’s probably further off than you think.

The argument isn’t that clear-cut, though, because both sides make an incorrect assumption. Adding hardware isn’t free, and it isn’t just a cash expense: hardware has a time expense, too.

Each additional server takes time to manage. And in small companies, this is often done by… the programmer(s).

Not every server role scales linearly or easily. In a typical modern architecture, webservers, caches, and proxies scale almost linearly. But database access sure doesn’t, especially writes. Sharding increases backup and redundancy requirements, and replication increases application complexity and data fragility.
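To make that complexity concrete, here’s a minimal sketch of hash-based sharding in Python (the shard names and the dict standing in for three databases are hypothetical). Routing a single-key operation is easy, but a query that used to be one SELECT now has to fan out across every shard and be merged in application code:

```python
# Hypothetical sketch: three shards, with dicts standing in for databases.
SHARDS = ["db0", "db1", "db2"]

def shard_for(user_id):
    # Single-key operations stay simple: route by key.
    return SHARDS[user_id % len(SHARDS)]

# Fake per-shard data, standing in for three separate database servers.
data = {
    "db0": [{"id": 3, "name": "ann"}],
    "db1": [{"id": 1, "name": "bob"}],
    "db2": [{"id": 2, "name": "ann"}],
}

def find_by_name(name):
    # What used to be one query is now a fan-out across every shard,
    # merged (and, in real life, sorted and paginated) by the application.
    return [row for db in SHARDS for row in data[db] if row["name"] == name]

print(shard_for(4))                              # routes to one shard
print([r["id"] for r in find_by_name("ann")])    # touches all shards
```

The write path gets even hairier: backups, failover, and rebalancing now exist per shard, which is the time expense the paragraph above is pointing at.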

And not every type of hardware is constantly getting significantly better or cheaper. Parallelized CPU power usually follows this trend, but disk speed doesn’t. It improves much more slowly. Today’s 15,000 RPM disks aren’t much faster, larger, or more reliable than 2004’s, and we’re still at least a few years away from common, practical, affordable SSD use in servers.

So while it doesn’t make much sense to try to micro-optimize the CPU usage of your webservers (outside of algorithmic complexity reductions), it definitely does make sense to reduce database activity, especially writes and nontrivial reads.
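One common way to cut those nontrivial reads is a read-through cache. Here’s a minimal in-process sketch in Python (the TTL, key names, and stubbed query are all hypothetical); the point is that the database sees one expensive query per TTL window instead of one per request:

```python
import time

_cache = {}  # key -> (expires_at, value); a toy stand-in for memcached etc.

def cached_read(key, ttl, fetch):
    """Return a cached value, calling fetch() only on a miss or expiry."""
    now = time.time()
    entry = _cache.get(key)
    if entry and entry[0] > now:
        return entry[1]               # cache hit: no database work at all
    value = fetch()                   # cache miss: one expensive read
    _cache[key] = (now + ttl, value)
    return value

# Hypothetical expensive query, stubbed out for illustration.
calls = 0
def expensive_report():
    global calls
    calls += 1
    return {"rows": 42}

first = cached_read("report", ttl=60, fetch=expensive_report)
second = cached_read("report", ttl=60, fetch=expensive_report)
assert first == second and calls == 1   # second read never hit the "database"
```

A real deployment would use a shared cache (memcached or similar) so every webserver benefits, but the economics are the same: trading cheap memory for scarce database capacity.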