I am sure we all could provide other examples of poor application design choices. The problem is that the file system and operating system most often have no choice but to execute whatever dumb request the application makes, as there is little to no communication along the data path. The usual solution is the hardware approach -- in other words, throw hardware at the problem. That works to a point, and with the advent of SSDs, it works to a point farther down the path, given the lower latency, often higher bandwidth, and greater IOPS capacity of SSDs.

The question then becomes, is using SSDs to solve an application design problem the right solution? The hardware vendors, of course, will tell you yes, and in the short run they might be correct. Buying a few SSDs might be less expensive than paying to redesign applications, but in the long run just throwing hardware at the problem has limitations. When you hit them, you will have no choice but to rewrite your applications.

Then you will look back on all of the money you spent over the years throwing hardware at the problem, only to still be faced with the cost of the rewrite, and you will not be happy.

Henry Newman is CEO and CTO of Instrumental Inc. and has worked in HPC and large storage environments for 29 years. The outspoken Mr. Newman initially went to school to become a diplomat, but was firmly told during his first year that he might be better suited for a career that didn't require diplomatic skills. Diplomacy's loss was HPC's gain.
