> On the other hand, there is a very wide gap between what expensive SSD can reasonably deliver and what much cheaper spinning rust can manage. Spinning rust can manage a wide range of use cases.
>
> It's SSD that represents the niche: small data for very casual users that don't do much of anything.

If this were even close to true, large corporations would not use NVRAM technologies to back their incredibly critical data stores. That "spinning rust" in a mid-sized 8-drive RAID-10 array can deliver roughly 2,000 operations per second. One 2.4TB FusionIO drive, for example? Over 500,000. There's no comparison here. A SAN built from traditional platters that came even close to those numbers would fill multiple racks; this is a single PCIe card.
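The gap is easy to sanity-check with back-of-the-envelope arithmetic. Here's a sketch; the per-spindle figure of ~250 random IOPS is my own assumed estimate for a fast 15k RPM disk, and only the totals come from the numbers above:

```python
# Back-of-the-envelope IOPS math; the per-drive figure is an assumed estimate.
DRIVE_RANDOM_IOPS = 250   # optimistic random IOPS for one 15k RPM spindle
DRIVES = 8                # 8-drive RAID-10: random reads can hit every spindle

raid10_read_iops = DRIVE_RANDOM_IOPS * DRIVES
fusionio_iops = 500_000   # the FusionIO figure cited above

print(raid10_read_iops)                    # 2000
print(fusionio_iops // raid10_read_iops)   # 250x gap
```

Even granting the array its best case (reads striped across all eight spindles), the flash card is two orders of magnitude ahead.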

Hard drives weren't always large and inexpensive. In 1996, it was huge news when manufacturers could advertise prices under $1/MB. Not GB, MB. A 1GB drive cost $1,000 less than 20 years ago. Current high-end Samsung 840 Pros run about $220 for 256GB. They're clearly getting cheaper at a pretty fast rate. And that's with NAND flash, which will probably be phased out in favor of ReRAM in the next 5 years. ReRAM is even faster, has higher density potential, and is cheaper to produce. It leapfrogs flash by almost two orders of magnitude, to the point where it's basically as fast as RAM.
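Those two price points imply a striking rate of decline. A quick sketch of the trend, using only the figures above (the dates and the flash-vs-platter comparison are approximate, so treat the halving time as rough):

```python
import math

# $/GB at the two price points mentioned above (approximate dates).
price_1996 = 1000.0       # ~$1/MB, i.e. ~$1,000 for a 1GB drive
price_now = 220.0 / 256   # 840 Pro: $220 for 256GB => ~$0.86/GB

ratio = price_1996 / price_now          # ~1164x cheaper per GB
years = 2013 - 1996
halving_time = years / math.log2(ratio)

print(round(ratio))                     # 1164
print(round(halving_time, 1))           # 1.7 (price halves roughly every ~1.7 years)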

No hard drive, RAID, or SAN could ever claim that, no matter how much cache it has, how fancy the RAID is, or what interface it uses. We're entering an era where persistent storage will effectively be an afterthought. To see people still defending traditional hard drives in the face of that is odd, to say the least.

That's guaranteed to happen. The only question is the extent. There's bound to be a few who say, "Hey! I don't have to know what I'm doing. That one guy said so!" In reality, we know better. Progress is made by learning from the mistakes of others. :)

Far from it. I seem to recall a researcher I read about over a decade ago who was designing a chip that worked more like a human neuron. Superscalar pipelining is just how Intel does instructions, and even they're trying to get away from it, because cache misses grow more expensive as pipelines lengthen. Giving a talk on not being constrained by accepted dogma and outright throwing away all known concepts are two completely different things.

The very fact that you and I can even have this conversation is because we know what those things mean. We know they've been tried. We know their limitations and strengths. We know there are alternatives. Having a strong grounding makes it possible to progress, even through occasional setbacks. Standing on the shoulders of giants, and all that. Throwing away everything you know and starting from scratch is very romantic, but it isn't very practical if you want to collaborate with others.

'I think you have to say: "We don't know what programming is. We don't know what computing is. We don't even know what a computer is." And once you truly understand that, and once you truly believe that, then you're free, and you can think anything.'

I agree having an open mind is a good thing. There is, of course, such a thing as taking it too far. Just throw away everything we've spent the last 40-50 years developing? Is there some magical aura we should tap into, rocking back and forth as we absorb it? Should we hum esoteric mantras under the enlightening influence of various chemical enhancements, awaiting Computing Zen?

I think that's the key, really. Quite a few companies I know are switching from RHEL or CentOS to Ubuntu LTS releases purely because of familiarity. If that changes, Ubuntu will lose the corporate customers that actually matter.

Red Hat got it right. I'm not sure why Ubuntu is having such a hard time figuring this out.

Hah. Actually, this time it was relevant. I was about to use a joke, "I don't think you have the authority to do that, son." But then I looked at his account number... I mean, really looked, and couldn't believe it. Has Slashdot really been around that long?

> All those things are true, but his point is exactly what you said yourself, PostgreSQL is finally adding "Enterprise features."

I said that sarcastically. Note the snide quotes around "enterprise". :)

PostgreSQL is gaining popularity partly because Oracle purchased MySQL's parent company, and partly because of the newer features. But really, it's been "enterprise ready" since the 8.0 branch was released. Maybe I'm biased, being a PostgreSQL DBA, but many of the Oracle contractors I keep company with suggest, anecdotally, that there's been a massive effort by several companies to replace Oracle with PostgreSQL in the last year or two.

And like a previous commenter said, DBMSs have come in various shapes and sizes since the '70s. But like all technology, things mature, and Oracle is no different in this regard. PostgreSQL came out of Postgres, a Berkeley research project from the late '80s that itself grew out of Ingres, and that early version didn't even use SQL. It's not like someone sat down one day and said, "Gee, how do I clone Oracle?" You can have notable and disruptive technology in existing categories.

And as much as I roll my eyes at the mention of NoSQL as some kind of ground-breaking concept, it's a fairly innovative application of the key-value-store caching model, with varying levels of compromise depending on the implementation you pick. They're not all just clones of memcached. Anyone who argues Cassandra, MongoDB, CouchDB, or Redis are all technologically equivalent needs a boot to the head. They're each innovative in their own way, and each scales better in different areas depending on usage patterns.
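To make the "key-value caching model" concrete, here's a toy sketch of the memcached-style baseline that these systems extend in different directions (persistence in Redis, documents in MongoDB/CouchDB, partitioned rings in Cassandra). This is purely illustrative and reflects none of their actual implementations:

```python
import time

class ToyKVCache:
    """Toy memcached-style cache: set/get on opaque keys, with lazy
    TTL expiration. Real systems layer their innovations on this model."""

    def __init__(self):
        self._store = {}  # key -> (value, expiry timestamp or None)

    def set(self, key, value, ttl=None):
        expiry = time.time() + ttl if ttl is not None else None
        self._store[key] = (value, expiry)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expiry = entry
        if expiry is not None and time.time() >= expiry:
            del self._store[key]  # expire lazily on read, like memcached
            return default
        return value

cache = ToyKVCache()
cache.set("session:42", {"user": "anon"}, ttl=30)
print(cache.get("session:42"))  # {'user': 'anon'}
```

The interesting differences between the real systems are precisely the parts this sketch omits: durability, replication, data model, and partitioning.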

My point was that his fundamental assumption (at least as I read his comment) is wrong. This stuff has been slowly taking over and replacing commercial software for years, and the pace is just accelerating. It's gotten to the point that major multi-million dollar firms are reaching for open-source first, not because it's equivalent to some other product, but because it's better.

OP basically wrote off half the examples as pointless. I was just saying it's not quite that simple.

As a side note, I'll have to look into Informix. I've never heard of anyone comparing it to PostgreSQL and getting the results you claim, but it's worth investigation.

Yes. Because you, a random user on the internet, have such compelling arguments that I can't help but read through your specific comments.

But I rolled my eyes and did it anyway, because hey, with that kind of challenge, there must have been some notable comment I'd missed. So after reading through your comments, I can only admit I have no clue how any of your dismissive arrogance can be construed as a logical, reasoned response to the rebuttals of my comments, or anyone else's.

I won't reduce your reply to a humorous ad-lib as I normally would, because you're clearly either missing the whole point or inadequately supporting your position. Telling me to put forth effort you're clearly unwilling to expend yourself suggests a level of hypocrisy I refuse to perpetuate. Have fun explaining why everything FOSS ultimately sucks and accomplishes nothing, because hey, defeatist attitudes get a lot done at the end of the day.

> PostgreSQL: Just a relational database, and usually behind the heavy-hitters in terms of features. Mainly notable for at least being competitive with the big, commercial databases.

You may say this, but that's because you're not a DBA. PostgreSQL drives Skype, Pandora, Reddit, and IMDb, for example. Working in the financial industry, we use it, and a few other companies I know of are converting from SQL Server to PostgreSQL. Financial companies are especially prudent considering the liability concerns, and we do a metric assload of testing; we don't just convert for shits and giggles.

The PostgreSQL developer community is probably one of the best organized and most responsive I've ever seen outside the Linux kernel. They put out a major release almost yearly, and the 9.x branch is especially notable, since they're starting to include "enterprise" features. It's one of the few DBs that can regularly post equal or greater performance than Oracle, and it just keeps getting better. There are a number of reasons for this, but chief among them are the storage and memory architecture improvements they've been incorporating over the last few years.

Its popularity was never as high as MySQL's because, as you said, MySQL isn't a real DB. It's easy to set up and gets the job done without being quite as stripped-down as something like SQLite. For whipping up a DB-driven website it was dead simple, and as PHP gained popularity, so did MySQL.

And there's another very active project: PHP. Argue all you want about their design philosophy, but that language took the web by storm. Python and Ruby are similar in that regard: highly active, groundbreaking open-source projects. Rails and Django are both huge sources of newer websites these days, and for good reason. Hell, even Drupal is being used by The Onion, and that's another huge PHP community.

You could just as easily say Apache is "just a web server", especially since Lighttpd and Nginx have both been outperforming it for years now. They may not count because they're not huge community projects, but they're more than viable and are used by major sites. My company's site, for example, pushes 120MB of sustained traffic all day long for a financial trading application, and we run both Nginx and Apache on top of PostgreSQL. The DB alone handles 10,000 transactions per second at peak (and can scale to roughly twice that on our hardware).

What, exactly, does it take for a release to be considered "worthwhile"? Abandoning major commercial vendors? They're doing that. Scaling to huge usage? Reddit uses an entirely open-source stack including PostgreSQL, RabbitMQ, and Cassandra, and just broke 1.2B page views per month. Not just any page views, but fully threaded forums with a moderation system. I already mentioned Skype and Pandora. Sit up and take notice? They already started doing that.

The term "clone" is also pretty subjective. Lots of projects are spawned simultaneously, and the commercial product inevitably reaches the market first because of the paid developers that just work on it all day long. But when a FOSS project gets some momentum behind it, it really catches up in a big way.

FOSS projects individually have their warts. But the market is also littered with commercial software that is a clone of a clone of a clone, and is either out of business or a product nobody wants. Being FOSS is not the differentiating factor in that regard. But the good software, both commercial and open-source, lives on, and the really good examples are adopted with greater velocity as they mature. Criticize open-source all you want, but discounting it as an also-ran is a giant mistake.

I'm starting to wonder if consoles are a dying breed. They used to come out every 3-5 years like clockwork, with major advances every time. Now every maker seems to be phoning it in. And if Microsoft, king of the 66% hardware failure rate, is the only one that takes the next round seriously, I fear for the future.

I salivated over the release of the PS2. I have tons of games for it, and most of those are JRPGs and DDR. That console just wouldn't die, and it seemed like everyone wanted to release onto it. My Wii library is decidedly smaller, and I totally skipped out on the RROD-box and kept waiting for the PS3 to come down in price. Looking through the game libraries of each, there's only two or three games I'd even want to buy anyway, which clearly isn't worth it.

So far, both Nintendo and Sony have said "meh" to the next console round. So I have to wonder why.

Yeah. I've been getting irritated about this myself. All of the new phones are dinner plates. And it's actually getting worse. It seems like 4.0" is the new minimum screen size, barring the iPhone.

Apparently "the public" wants bigger screens, and market pressures being what they are, that means smaller phones get the shaft. I'd be fine with a 3.7"... but everything good is 4.3" or larger these days. The 3.2" of the Eris feels a little cramped, to me, but these new phones are just taking things way too far.

The worst part is, all the good phones are the biggest ones, or can't be modded. The Eris modding community has basically moved on to the Thunderbolt, save a few diehards. The Incredible II is boot-locked. I wouldn't touch LG with a ten-foot pole, based on my experience with the enV3 and enV Touch. The Galaxy S line never came to Verizon, and the Galaxy S II (Function) is... 4.3", the new standard.

I'm actually off-contract now, but I keep waiting on phone announcements for something that either doesn't suck, or isn't a dinner-plate. I may have to just grit my teeth and go for the Galaxy S II, which has gotten universally glowing reviews from basically the entire internet.

I'm a writer, and an oppressive DBA. I love my wife, DDR, and anime. I've written a serial novel, dubbed Rabbit Rue [kildosphere.com], which is the first part of an ongoing trilogy. Maybe I do too much, but I do it well.