Posted by Hemos on Tuesday August 14, 2001 @07:30AM
from the all-the-news-that's-fit-to-print dept.

implex writes "Interactive Week had an article called "Data Underdogs" in which they compare present open source databases with present commercial offerings. In one part they mention: ...On the other hand, MySQL developers now have a much-needed transaction management system: NuSphere last month made its Gemini transaction manager for MySQL available as open source code on mySQL.org, a site that the company recently launched. Complicating matters, though, is NuSphere's blood feud with MySQL AB, a Swedish company that runs a competing open source development site for MySQL code at www.mysql.com. Interestingly, there was no mention of the fact that MySQL AB actually created the product."

Instead of focusing on the available solutions, the article simply spits out the idea that MySQL and Postgres are weak pretenders.

The FACT is that these databases are excellent solutions for MOST database problems. Sure, like all DBMSs, they can always use more features. But I don't want my DBMS to turn into an uncontrolled monster like Oracle.

For 99% of the applications out there, Postgres and MySQL fit the bill. If you're doing large-scale distributed payroll using SAP, then I suggest you go with a big name...

But if you're an average-sized business, Postgres is a full-featured solution today. It is an inexpensive, fully-capable solution.

If you're into writing Oracle PL/SQL, a proprietary procedural extension to SQL, go with Oracle. (Note: PL/SQL doesn't work with Sybase or DB2 or anything else.) If you're into Transact-SQL, another proprietary SQL extension, go with Sybase. Once you get into Transact-SQL, you'll NEVER migrate out without expense. In fact, my shop, an Oracle shop, doesn't PERMIT developers to use the PL/SQL extensions. We learned our lesson after migrating from proprietary MS SQL Server extensions to Oracle!

And if you need a big company to support your 20,000 person payroll, go with IBM's DB2. Again, another fine DBMS.

One problem with proprietary DBs is that their docs will steer you toward non-standard SQL even when standard SQL will work. For example, Oracle will teach you to use NVL and Sybase will teach ISNULL, when COALESCE works in both databases.
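For the curious, here's the COALESCE point in runnable form. SQLite (via Python's sqlite3 module) is just a convenient stand-in engine, and the table and names are invented for illustration -- but the same COALESCE call works unchanged on Oracle, Sybase, and PostgreSQL, which is the whole argument:

```python
import sqlite3

# COALESCE is standard SQL and returns its first non-NULL argument.
# Oracle's NVL and Sybase's ISNULL do the same job, but only on their
# own platforms. SQLite is used here purely as a small test engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, bonus INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("alice", 500), ("bob", None)])

# Portable: substitute 0 when bonus is NULL.
rows = conn.execute(
    "SELECT name, COALESCE(bonus, 0) FROM employees ORDER BY name"
).fetchall()
print(rows)  # [('alice', 500), ('bob', 0)]
```

Write it this way from day one and the query never needs touching when the deployment DB changes.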

The solution is to develop with PostgreSQL regardless of what your deployment DB will be. Their docs favor standard SQL. The code you develop will work with the proprietary DBs as well.

+++The solution is to develop with PostgreSQL regardless of what your deployment DB will be. Their docs favor standard SQL. The code you develop will work with the proprietary DBs as well.+++
Baloney. Their docs specifically state they aren't SQL-92 compliant, and they don't try to be. Go ahead and create a cursor with updatable columns in Postgres, then come back and blabber on about "standard SQL."

If you actually read my post, you'll notice that I never claim that they implement all of SQL-92. I won't bother restating myself as the original post was quite clear.

P.S. Cursors with updatable columns did not actually work in MSFT SQL 6 until some time long after the marketing literature claimed they did. With PostgreSQL, you can expect more accuracy as to what is/isn't implemented.

What's the problem here? We've got an open source database that's being developed in two somewhat different directions by NuSphere and MySQL AB... seems like in the end it'll lead to two different, but each (for their intended applications) excellent products. I just don't see a problem.

What's the problem here? We've got an open source database that's being developed in two somewhat different directions by NuSphere and MySQL AB

I think there is a subtle difference between merely forking someone's code and forking someone's code but passing it off as the original. Nu does not own that code, and Nu did not originate that code. Yet they put up a Web site which tries to pass itself off as the official source. And this has somehow led to a journalist being so confused as to call the original coding house "a competitor." Sure they compete, that's good. But it is more accurate to call MySQL AB the originator and Nu the competitor. Nu is the new kid on the block that just entered the market. The more they try to pass themselves off as the official source, the more they upset people in this community who know better. They're misrepresenting the situation, and that's not ethical.

In slashdot's earlier article (a week or two previous), Nu's comment that "the GPL is not enforceable" doesn't exactly bode well either -- Nu is getting ready to do some seriously rogue shit, and I don't want to support that.

We all know that (needless) forking is really bad for a project. It makes more sense to have the base MySQL, and have the NuSphere stuff be add-ons/patches. At least until everyone gets together on what's going on.

I think that NuSphere tried (is still trying) to muscle in on MySQL and become THE MySQL company. I sort of see it like the Sybase/M$ thing. For a long time M$ just resold Sybase stuff. Then they decided they didn't need Sybase anymore, so they dumped them and put their marketing behind M$ SQL Server. Now hardly anyone recognizes that Sybase wrote the core of M$ SQL Server......

I think we all need to support MySQL AB as the original authors of MySQL. The people who risked their butts to bring it to us. I also think we should all tell NuSphere that they should be grateful to MySQL AB and learn to play nicely. (Unless they want to be viewed in the same negative light as M$.)

It's been my experience that the only things really holding the major open-source DBMSs back were a lack of exposure and of solid transaction management. I hope that, by now at least, many DBAs have at least heard that there are alternatives to Oracle and SQL Server. As a programmer and database designer, I know I've been trying to keep my co-workers' and employer's eyes open to the possibility of using an open-source DBMS when the need arises.

It seems to me that implementing MySQL isn't any more difficult than getting an Oracle database built. Although, I must say I've gotten spoiled by the GUI that's available in Microsoft's SQL Server Enterprise Manager. It's damned nice.

I guess the long and short of it is that it's good to have an alternative to Oracle, which is, in my opinion, overpriced for what it is. As long as products like MySQL keep getting better, they'll continue gaining ground as not just "alternatives" to pricey DBMSs, they'll become the DBMSs of choice.

I must say I've gotten spoiled by the GUI that's available in Microsoft's SQL Server Enterprise Manager. It's damned nice.

I assume you've heard of/used phpMyAdmin [sourceforge.net], a PHP web application that reminds me a lot of the interface to MS SQL Enterprise Manager (though not as full-featured, I'll admit). Anyway, it's something to check out if you're interested. Saves a lot of time on testing and prototyping.

I must say I've gotten spoiled by the GUI that's available in Microsoft's SQL Server Enterprise Manager. It's damned nice.

Is that the GUI that rewrites your query when you click "go"? Capitalising keywords is one thing, but rearranging the boolean terms in the WHERE clause, making non-trivial queries unrecognisable after they've been performed, is just obnoxious.

Actually, I've noticed a growing trend in the crowd of programmers I run with: not caring about the implementation platform. SQL is SQL, and you don't do transaction-based stuff on the platform anymore; you abstract away from that into your middleware and do things the way you want to. In REAL LIFE programming, you often don't have the chance or opportunity to spec out what your back-end platforms are; you have to deal with what's given, or what's legacy at Company XYZ Inc. You are also often dictated a programming language for the project, whether it be Java, C, COBOL, Perl, Python or whatever.
The real value lies in being able to adjust to whatever the PROJECT calls for, and being able to implement on just about any platform you need with strong, good design patterns. MySQL and Oracle both do exactly the same thing from my point of view: hold data in a relational format for storage and quick retrieval. Putting too much logic into the database only serves to slow things down in the long run.
NASA Switches from Oracle to MySQL [mysql.com] shows us why Oracle putting all those bells and whistles in their product may ultimately lead to a weakening of their market share. The fact is, bells and whistles cost memory and processor, and there's a balance between the two that Oracle seems to be blithely ignoring.

There are four levels to the SQL-92 standard, and even commercial RDBMS vendors do not conform to all of them.

Oracle, Informix, DB2, MySQL all have different optimizers and differing concurrency schemes. Oracle does not lock a row for reading when another transaction is writing a row. Informix will perform table scans on certain queries where DB2 will not.

This "growing trend" you are talking about must be coming from inexperienced programmers working on trivial or single-user applications. In REAL LIFE, the security of data and the usability of the client are paramount.

The fact that you would even say that MySQL and Oracle do the same thing displays your complete lack of knowledge regarding what modern commercial database products are capable of. Leaving all the programming logic in the hands of applications developers re-invents the wheel, escalating costs while introducing more bugs into the system.

wow, where have you been the past couple of years... Ever hear of n-tiered development? Or actual projects for actual companies that happen to store data in completely disparate databases? How do you manage a transaction that includes RDB, MS SQL Server, and the provisioning hardware? Oracle's little transaction manager won't do it. So you stack a layer on top of that and do your transactions there. Prep the transaction first, then commit it to the database(s). MySQL and Oracle at that point do EXACTLY the same thing, only MySQL happens to do it a bit quicker.

n-tiered architecture is used for exactly this purpose, to provide a layer of abstraction for both the data providers AND for the application.

My application shouldn't care what dataproviders it's talking to, it just knows that it has an order for whatever.

My database shouldn't care what transactions are happening, because my database is there to store & retrieve DATA.

so we utilize middleware that has the logic for managing those transactions. Works great, allows me to use whatever database the data is stored in, allows me to provide whatever app interface the users want to use, and allows me to write the transactional piece in a very modular, easy-to-interface-with manner.

I don't think you can "abstract away transaction related stuff" from the RDBMS. Please correct me if I'm wrong, but I believe that you simply cannot start with an RDBMS that does not support transactions and slap transaction-processing middleware on top of it to instantly have a database application with transactions.

The standard transactional interfaces that are increasingly becoming popular nowadays (like JTA) depend on the database supporting transactions; most likely in the form of an XA-conformant programming interface.
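To make that point concrete, here's a toy sketch in Python, with two SQLite databases standing in for the resource managers (the `transfer` helper and the table are invented for illustration). It is deliberately NOT real XA or two-phase commit -- SQLite has no PREPARE phase -- but it shows that middleware can only *sequence* commits; each underlying store must itself support rollback, or there is nothing to fall back on:

```python
import sqlite3

def transfer(src, dst, amount):
    """Naive 'coordinator': move money between two independent databases."""
    try:
        src.execute("UPDATE account SET balance = balance - ?", (amount,))
        dst.execute("UPDATE account SET balance = balance + ?", (amount,))
        src.commit()   # if this succeeds but dst.commit() fails, we are
        dst.commit()   # left inconsistent -- hence real 2PC/XA protocols
    except Exception:
        src.rollback()  # only possible because each store is transactional
        dst.rollback()
        raise

bank_a = sqlite3.connect(":memory:")
bank_b = sqlite3.connect(":memory:")
for db, start in ((bank_a, 100), (bank_b, 0)):
    db.execute("CREATE TABLE account (balance INTEGER)")
    db.execute("INSERT INTO account VALUES (?)", (start,))

transfer(bank_a, bank_b, 40)
print(bank_a.execute("SELECT balance FROM account").fetchone()[0])  # 60
print(bank_b.execute("SELECT balance FROM account").fetchone()[0])  # 40
```

The commit-gap comment marks exactly the window an XA-style prepare phase exists to close.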

But mirroring the success of Apache and Linux will be no small feat for the three most popular open source databases - InterBase, MySQL and PostgreSQL - which combined represent less than 3 percent of the market, according to even the most optimistic estimates of the suppliers themselves.

Did this number strike anyone else as too low? This is the first time I have seen any percentages on OS DBs. Alright, 3%, but of what? Does this mean that less than 3% of all bytes stored in any database are stored in an OS DB? That may be, but I cannot believe that less than 3% of all databases running are OS.

The thing is, when I started studying computer science, we only ever worked on mSQL boxes. My first job used Postgres and the second used MySQL. Anything I work on myself is MySQL. Granted these jobs have all been web-related, but when you think about how many ISPs offer MySQL/Postgres preinstalled (not to mention Linux distributions, if that counts), 3% seems ridiculous.

Well I suppose MS Access runs on one or two computers out there... that might raise the non-OS score.

"Market" is a word that refers to a place where things are bought and sold.

When a product has 3% of a market, that means that 3% of the total amount of money exchanged for that category of product is given in exchange for the product in question.

Now, you may argue, with considerable justification, that market share is a meaningless metric of open-source software acceptance, but for the mass of humanity that isn't still living with their parents, markets are what put food on the table.

Think of how many VB-based desktop apps are out there - figure that probably 70-80% of them bundle some form of the Jet engine, and/or use Access locally. That's been spreading for *years* and if you count that, 3% of the market for MySQL, PostgreSQL and others doesn't sound too far out. Of 'web-related' databases, I'd figure MySQL and PostgreSQL are probably a lot higher - certainly in the double digits.

Thanks for pointing it out. You know, it's a whole feature they did... AFAIR if you click "next" at the bottom of the page, you will go through all the articles they wrote on the topic. I tried submitting this to /. but no luck.

While I appreciate the sentiment of the article, it seems rife with misinformation. For example:

"Another feature that NuSphere is adding to MySQL is replication..."

Er, I'm pretty sure that's been in there for quite some time, through the master/slave system.

"The language resembles Oracle's PL/SQL (Procedural Language/SQL), except that PL/pgSQL offers the use of functions only, not procedures. A function call always returns some result, while a procedure may execute certain operations without returning a result."

"But some Web businesses are finding that they can function perfectly well with open source databases."

Yeah, I guess the features I like are speed, stability, ease of deployment, and excellent development tools. PostgreSQL and MySQL have all of this in spades; the commercial databases I've worked with usually seem clunky and contrived by comparison.

I use Interbase and am always amazed at how little attention it gets compared to MySQL and PostgreSQL. It has a lot of the more advanced features people claim aren't available in open-source databases, e.g. the online backup capability the article went on about. It had transaction support from the get-go, supports stored procedures, and on and on.

It baffles me that the author of the article could know about Interbase and yet seemingly not even look at it in the course of researching the article.

The article seems to claim OSS DBMSs aren't as successful as commercial solutions. Unfortunately, it follows the typical capitalist ideal that something has to make money to be successful.

OSS flies in the face of American ideologies, and this is just another example of big corporations not seeing the value.

Certainly Oracle is much more robust than MySQL right now, but I think the OSS ideology of trying to make the best product will eventually beat out the ideology of trying to squeeze as much money as possible. Technology will prove to be the end of traditional free-market theory, I think, because there is no longer a solid commodity with value. Things can be copied, so value has to be placed on the creators, not the product.

The article mentions that the extreme complexity of database management systems is a barrier, which is true. There is one other thing the commercial vendors have that is a big challenge for the "underdog" OSS vendors: trust.

Companies keep everything on database systems. Hundreds of geek-hours must go into the design of a database application for a company. Whatever system a company chooses, they must be reasonably sure the system will:

almost never fail

be supported by a stable company and

integrate well with other systems, into the future.

A smaller price tag may be a good start for targeting smaller companies that don't rely heavily on database applications, but there's a reason Oracle can charge $15k/CPU for 9i Standard: the reputation is worth it.

One of the things that I like best about PostgreSQL is the fact that the developers are brutally honest about the software. The core PostgreSQL developers have always been quite frank about which parts of PostgreSQL were ready for production, and which parts were kludges, or were largely untested. The problem with commercial databases, even good ones like Oracle, is that the people who know where the rough edges are aren't talking about them. That sort of honesty goes a long way towards building my trust.

PostgreSQL has an amazing feature set, especially considering its price. I think that fairly soon Oracle is going to wake up to the fact that the database is becoming a commodity market, and quite frankly, they aren't likely to be competitively priced.

I don't think you're likely to find that happening at the commercial level. It was so long ago I don't even remember where I read it (though it's the sort of thing that a /. poster might say), but I once heard it said that acknowledging bugs is probably a great way to tick off your investors.

"What, there's a flaw in your product?"
"Yes, but we can fix it pretty easily -- have the bug fix out tomorrow..."

Whereupon the non-technically-inclined investor writes off the developer's comments as a sign of weakness and sells. I think that's also why commercial bug-fix releases are such a big deal -- probably Microsoft's other reason for cramming all kinds of new features into their service packs.

I know I am not likely to get the kind of honesty Tom Lane (a PostgreSQL hacker) shows from the vendors of commercial software. That's the whole point. How many times have you listened to the representative of one software firm or another talk about their new "features", only to find that when deployed in a real-life scenario the new features fold like cheap card tables? I personally have seen it far too many times. Hiding these flaws may seem like a good idea to a marketing executive, but all this does is guarantee that I won't trust that company in the future.

Honesty truly is the best policy, and that's part of the reason that I like PostgreSQL. The PostgreSQL core team is quite frank about which parts of their software are ready for prime time, and which parts aren't. More importantly, by skimming the pgsql-hackers list I can see what's being worked on, and how far they have progressed. That way I can decide for myself which features will be ready for production by the time my new stuff is ready to roll.

When it comes to trust, I would much prefer PostgreSQL's openness over the assurances of some faceless corporation that wants my upgrade fees so that they can have the revenues they need to keep Wall Street off their back.

"Extreme complexity of database management systems is a barrier" -- this can be taken two ways:

It takes an 8-ton gorilla of a database system to get the job done.

It takes a lot to manage an 8-ton gorilla of a database system.

I believe the first may be true depending upon the needs, but the second is a given. What has not been stated is that the more complex the database system is, the more likely there will be errors on the part of those implementing the system if they are not fully trained. In some ways, the difference is similar to the difference between operating a truck versus operating a jet plane. Both are designed to get stuff from one place to another, but the number of controls on a jet plane and the amount of data being produced by various meters is considerable.

People really need to weigh two parts of the issue.

Does our task require a jet or will a truck suffice?

Are we willing to pay for proven jet pilots, or is it safer to pay for an equal number of proven truckers. The key point here being "proven."

Too often I see agencies who buy into the idea that the only "real" solution is the jet. This is a humorous thing to see when the jet is only used for trips across town...

You only need an aircraft carrier if you are landing fighter planes at sea. If you are just going fishing, a rowboat is fine.

Oracle is nice, but it is completely overkill for most projects. PostgreSQL has many of the same features, including all of the referential integrity features that make Oracle so nice to develop for, and it comes with a fishing boat price.

I only have experience (but lots of it) with Oracle and - unfortunately - SQL Server. Until SQL Server 6.5, I think it sucked compared to Oracle. But now they're more on a level, although Oracle has always handled, and still does handle, huge data warehouses much better.

That aside, I worked for years with a 4TB data warehouse for a major credit card company. It was Oracle (7?) on a Sun E10000 (22 processors, 1GB ram) and it was screaming. We barely used any "advanced" features that were unique to Oracle. But what impressed me was Oracle's support. They had an office a few miles away and would send DBAs over to help out. Our DBAs were excellent, but when it came to very low-level tweaking, these Oracle DBAs knew their stuff. They would mess around with the OS to keep it as efficient as possible. And if there was ever any kind of failure or error, they came over to check it out.

Now granted, my company paid big bucks for the support, but at the moment that sort of support can't be found for an open source DBMS. These were highly skilled experts in the database they supported. I realize (partly from the article) that the current goal of open source databases is to grow in the low-end market - smaller systems and such - and I'd bet they'd stand up to large warehouses. But one big advantage Oracle and DB2, and to a much lesser extent SQL Server, have is their support. You can have a highly skilled technician in your office very quickly if you need it, beyond the support a consultant could provide. I'd like to see that kind of support from open-source companies. That's when I think they'll give closed-source databases a true run for their money... literally.

This is no doubt why Red Hat is getting into the DB game: So they can sell support for PostgreSQL.

I worked in a smallish Sybase shop for a while (tiny I suppose in comparison to a major credit card company) and I remember our DBA running around tweaking this and tweaking that to get the DB to run more efficiently. What struck me is that we had maybe 6 or 7 logical DBs all on one machine with no single table exceeding a million rows -- and he spends all day tweaking? Why didn't the damn thing just sit there and run? Why the need for all the tweaks? Sybase exposed so much configuration detail that even a competent DBA is bound to shoot themselves in the foot once in a while. Seems to me a decent DB system should hide and deal with as much complexity on its own as possible.

My experience is from about 12 years ago. The Air Force mandated that Oracle would be the "standard" DB, because it was "portable". There was an on-base Oracle representative. Our project was working just fine using VAX/DBMS, but we figured we would call up the Oracle guy and get a quote. It was something like $40K, or about quadruple what we were paying for VAX/DBMS. We chose not to switch.

Another time, a friend and I were experimenting with Oracle on an AT&T 3B2 (also mandated by the Air Force as the "standard minicomputer"). It was slow as hell, but that may have been AT&T's fault. Whilst creating a simple form, and trying to do something basic, like hitting the function key to add a trigger to a field, the forms app would blow up. So we called the Oracle guy, who helpfully advised: "Don't press that key." We were unimpressed with Oracle's tech support.

A lot of commercial apps have plugins for whatever DB you want to use. Once they wise up and provide plugins for DBs like RedHat DB, Postgres, MySQL and so on, we'll start to see higher market saturation.

The second thing which will help is when we get more commercial apps ported to Linux.

This is already happening. The product I use every day - SAP [sap.com] is available (commercially) for Linux. They support all the big DB vendors including Oracle, MSSQL (ok, not on Linux), Informix, DB2, and their own (open source) database SAPDB [sapdb.org]. I'm doing my bit, my site runs on PHP/MySQL.

"Databases are dramatically more complicated than any Web server or operating system technology."

The above is a quote from senior marketing director Bob Shimp, from the article. I will give him the Web server - which is not to say that it is not complex, but it is likely not as complex as a robust relational database. I cannot do the same for the OpSys. There is a dramatic difference in the levels of complexity between a monolithic single-user non-multitasking operating system (such as DOS) and a multiprocessing, distributed, parallel, asymmetric (etc., etc.) OpSys. The quote is not grounded in any sort of evidence, and I have serious doubts as to whether the 'marketing director' has ever encountered a kernel that did not come from a bag marked 'Orville Redenbacher'. It is simply misguided and misinformed, and the general intent seems to be undermining confidence in open source DBs (...furthering the myth that open source is 'unreliable'). Threatened? He likely should be.

I'd like to see an object data model (ODM) open source database come onto the scene. Now that would cause a ruckus, challenging both the bottom line and the validity of the relational model!

Did I say immediately? No. Given the course of programming languages over the last 20 years, object models seem the not-too-distant future for database applications. Whether they work well or not is a question of both preference and application requirements. Of course new vendors will not leap at the chance to use an unfamiliar database model, and the money backing the relational models upon which several major DBs are based is pressing for it to stay that way. Of course people in support of the ODM are on the fringe right now, but so were Linux users just a few years back. If we stuck with the dominant model in all things, we'd never have progressed.

"They laughed at Einstein. They laughed at Newton. But they also laughed at Bozo the clown."

Take this over to comp.databases. There you will find a small, but extremely vocal, crowd of ODM developers with a habit of completely failing to learn relational theory. That, I suppose, more than anything else is the worst -- they dub their products 'post-relational', without ever learning what 'relational' is.

More importantly, you must learn the distinction between data models and programming languages. See my previous post for more details.

MySQL has two table types that support row-level locking and transactions. One is tied up in this contractual mess, but the other, InnoDB [innodb.com] has no such issues, and may even be faster for many purposes.

One is tied up in this contractual mess, but the other, InnoDB has no such issues, and may even be faster for many purposes.

We recently did quite a bit of Perl development using MySQL and InnoDB tables, and they worked (surprisingly) well. Having transactions (finally!!) in MySQL is a huge blessing.
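A minimal sketch of what having transactions actually buys you. SQLite (through Python's sqlite3 module) stands in here for a transactional MySQL table type such as InnoDB, and the `orders` table is invented for illustration; the BEGIN/COMMIT/ROLLBACK pattern itself is generic SQL:

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode, so we can
# issue BEGIN/COMMIT/ROLLBACK explicitly, as application code would.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER)")

conn.execute("BEGIN")
conn.execute("INSERT INTO orders (qty) VALUES (5)")
conn.execute("INSERT INTO orders (qty) VALUES (7)")
conn.execute("ROLLBACK")  # something went wrong: neither row survives

conn.execute("BEGIN")
conn.execute("INSERT INTO orders (qty) VALUES (3)")
conn.execute("COMMIT")    # success: the row stays

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 1
```

Without a transactional table type, the first ROLLBACK would be a no-op and the half-finished work would be left behind.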

Somewhat related... while the article mentions that MySQL and Postgres don't have the large application development support infrastructures that the bigger commercial databases have, they can be a lot quicker to prototype and develop with because of their relative simplicity.

We're in the middle of migrating our application to DB2 on RS/6000, and I have to say I'm missing MySQL's simplicity of administration and configuration... you can try out a lot of new ideas quickly with MySQL, whereas a big chunk of our time at the moment is spent poring over DB2 manuals for obscure command switches and SQL options (the LOAD utility can be a barrel of laughs for newcomers)... of course it would help if our DBA were a little more competent, but that's a different story :-(

(And yes I do realise DB2 is much more powerful/robust...I'm talking about ease of development and rapid prototyping!)

I've moved a major (~8GB) database from DB2 to PostgreSQL for a client. It runs faster and is easier to feed and administer now. I'm in the process of moving a similar-sized database and app from MS SQL Server to PostgreSQL for the same reasons (plus the openness of PostgreSQL and Linux).

I really like DB2, it's very powerful, robust, and scalable. But it requires a fair amount of admin expertise and time. Not so much as Oracle, but much more than PostgreSQL.

What, frankly, surprised the heck out of me was the fact that nearly all of my queries (this is an audit system, OLAP, not OLTP) ran between two and four times faster under PostgreSQL. That adds up pretty quickly!

As far as the application development support infrastructures go, I'm not really sure what is meant by that. The current implementation of stored procedures in PostgreSQL falls short of what DB2 provides, I'll grant. But support for C, Java, Perl, PHP and Python is all there. It's a pretty high-speed/low-drag setup, IMHO.

The set of problems for which PostgreSQL is the best solution is expanding pretty rapidly. I won't pretend that it's the be-all RDBMS; I don't think such a thing exists. I would say that it's worth a serious look for many situations.

Imagine that, a consultant that not only lowered the maintenance and upgrade fees for his client, but also delivered a solution that was easier to maintain and ran faster to boot. I bet the port wasn't even that difficult. PostgreSQL is getting to be quite competitive feature-wise.

Believe it or not, this is precisely the type of thing that most employers want. However, most IS groups are too busy with CYA tactics to ever even worry about providing the best solution for the job. They just want to choose software that is safe, and in those cases the more expensive it is the better.

Your response was literally the funniest thing that I have heard in some time. Someone replaced DB2 with a low-cost, low maintenance PostgreSQL solution, and your suggestion is that he should have instead spent his time reading some arcane IBM manual.

One is tied up in this contractual mess, but the other, InnoDB has no such issues, and may even be faster for many purposes.

That might be the understatement of the year. InnoDB touts itself as the "fastest disk-based database" currently on the market. It's a pretty tall order, but it lives up to it. Our internal benchmarking tests for our application purposes show it to be about 7x faster than an identical PostgreSQL 7.1.2 solution. I've seen reports on the mailing lists that it can be up to 18x faster. You also get the simplicity and maturity of MySQL. The InnoDB benchmark [innodb.com] page has their own benchmarks, which pretty much mirror what we've seen internally.

Of course, MySQL has other drawbacks, namely that it doesn't support triggers or table inheritance or some of the more complex nuances of standard SQL, but the 95% of stuff it does have is very fast, and the other 5% can be handled in code. MySQL isn't popular because it's open-source, though. It's popular because it's good, free, and most importantly, extremely easy and intuitive to use.

It's a pretty tall order, but it lives up to it. Our internal benchmarking tests for our application purposes show it to be about 7x faster than an identical PostgreSQL 7.1.2 solution. I've seen reports on the mailing lists that it can be up to 18x faster. You also get the simplicity and maturity of MySQL. The InnoDB benchmark page has their own benchmarks, which pretty much mirror what we've seen internally.

Just a quick look at the benchmarks link tells me that they have fsync turned on on Postgres. What exactly is fsync? Every time Postgres writes to the disk, it calls fsync() to flush the write all the way through. Slow? Hell yeah. But you won't lose data sitting in the cache. It's turned on by default.

I realize that Postgres isn't the fastest in the world, but it's not 7x slower on 100k inserts. That's just bad benchmarking. Deceitful even.

If fsync is not on, I apologize. However the link mentions no performance tuning other than buffer pools and log buffers. If Postgres is defeated by 7x (18x?!) in a fair test, I'll concede. However this looks like the MySQL testing benchmarks on mysql.org; bullshit, plain and simple.
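The fsync cost being argued about above can be felt without any database at all. Here's a rough sketch in Python using plain files: the same writes, with and without a flush to disk after each one. The exact numbers vary wildly by hardware and OS, so no figures are claimed; the file names and row sizes are made up for illustration.

```python
import os
import tempfile
import time

# Rough illustration of why per-write fsync() is slow: each flush forces
# the OS to push data toward the physical disk instead of leaving it in
# the write-back cache.
def write_rows(path, rows, sync_each_write):
    with open(path, "wb", buffering=0) as f:
        for row in rows:
            f.write(row)
            if sync_each_write:
                os.fsync(f.fileno())  # durable, but waits on the disk every time

rows = [b"x" * 128] * 200
with tempfile.TemporaryDirectory() as d:
    for sync in (False, True):
        path = os.path.join(d, "data.bin")
        t0 = time.perf_counter()
        write_rows(path, rows, sync)
        label = "fsync per write:" if sync else "no fsync:"
        print(label, round(time.perf_counter() - t0, 4), "seconds")
```

On most machines the fsync run is dramatically slower, which is the trade-off the benchmark dispute above hinges on: durability of every write versus raw insert speed.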

You probably already know this, but fsync isn't as relevant to performance as it was on Postgres 7.0 and lower. 7.1 uses write-ahead logging (same as Oracle) and writes that info into the tables when it's convenient. 7.0 and lower had to write into the tables & indexes on each fsync, which means moving around a hell of a lot more data. That's really why 7.1 blows the doors off MySQL in recent tests. That's also why InnoDB is so fast: it has WAL as well. As an aside, InnoDB has an 8k row limit... sound familiar? It should; Postgres had that last year.

As for data lossage, well, there's a lot to discuss about on-disc caches and power supplies, SCSI vs. IDE. The best guarantee against data loss is still a big-ass UPS and some really fscking attentive operators.
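The write-ahead-logging idea credited above for 7.1's (and InnoDB's) speed can be sketched in a few lines. This is a toy, not how any real engine lays out its pages; the class and method names are invented for illustration. The point is just the shape of the trick: commits only need cheap sequential appends to a log, and the expensive random-access table updates happen later, "when it's convenient".

```python
# Toy sketch of write-ahead logging (WAL). All names are illustrative.
class ToyWAL:
    def __init__(self):
        self.log = []     # sequential appends: cheap to make durable
        self.table = {}   # the "heap": random-access, expensive to sync

    def write(self, key, value):
        # At commit time only the log record must hit disk,
        # not the tables and indexes themselves.
        self.log.append((key, value))

    def checkpoint(self):
        # Fold logged changes into the table at a convenient moment.
        for key, value in self.log:
            self.table[key] = value
        self.log.clear()

    def read(self, key):
        # Recent writes may still live only in the log.
        for k, v in reversed(self.log):
            if k == key:
                return v
        return self.table.get(key)

wal = ToyWAL()
wal.write("a", 1)
print(wal.read("a"))   # readable before checkpoint: prints 1
wal.checkpoint()
print(wal.read("a"))   # and after: prints 1
```

Pre-WAL Postgres was effectively doing the `checkpoint` work on every fsync, which is the "moving around a hell of a lot more data" complaint above.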

Of course, MySQL has other drawbacks, namely that it doesn't support triggers or table inheritance or some of the more complex nuances of standard SQL, but the 95% of stuff it does have is very fast, and the other 5% can be handled in code.

The instant you decide to move data and/or referential integrity from the DB into 'code', you've lost the battle. I can't believe someone would even suggest this. Sure, stored procedures are a plus and most times they aren't necessary, but you simply cannot have your integrity checks outside the DB. That's the whole god-damned point of an RDBMS!
If your integrity depends on unusual interactions in the data store, stored procedures are often the only way out. And you can't have stored procs that work to enforce integrity without triggers.

If you haven't got the integrity, you may as well be using a hashed filesystem to store your data for all the difference it would make. Hell, the hashed fs would probably be faster since it isn't pretending to be an RDBMS.
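The "integrity belongs in the DB" argument above is easy to demonstrate concretely. Here's a minimal sketch using SQLite (standing in for the databases under discussion, since it ships with Python); the schema is invented for illustration. Once the constraint is declared, the database itself rejects an orphan row, and no application code has to remember to check.

```python
import sqlite3

# Database-enforced referential integrity: an employee row must point at
# an existing department row, or the engine refuses the insert.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite ships with FK checks off
con.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE emp (
    id INTEGER PRIMARY KEY,
    dept_id INTEGER NOT NULL REFERENCES dept(id))""")
con.execute("INSERT INTO dept VALUES (1)")
con.execute("INSERT INTO emp VALUES (1, 1)")       # fine: dept 1 exists
try:
    con.execute("INSERT INTO emp VALUES (2, 99)")  # orphan row
except sqlite3.IntegrityError as e:
    print("rejected by the database:", e)
```

Doing the same check "in code" means every application and every ad-hoc script that touches the table must reimplement it correctly, which is exactly the battle the post above says you've already lost.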

The reality is that the presence of RI helps sophisticated query rewrite engines to do some very nifty transformations. So having RI actually helps you get better performing plans for read queries and hurts performance a little for writes.

I totally agree. If the backend knows that it can't compromise integrity through optimization (i.e., an optimization trial fails to maintain RI, etc.) then it also knows that when such tests don't fail, it has an optimized access method which kicks ass.

Maturity? Just how mature is an RDBMS that doesn't even support referential integrity? Sure, skipping it may speed things up sometimes, but do I really want to make my app code twice as big just to handle referential rules that should have been in the database in the first place?

My only problem with Interbase (6.0 Borland Server Edition -- Commercial) is that in order to do even simple SQL based manipulation of data, you have to actually physically write the functions into the DB. They aren't there natively (not in 6.0 anyhow). I have only recently started playing with 6.1 Open Source, but after talking to Borland in the past regarding 6.0, I'd be surprised if they "fixed" the issue.

These functions include things as simple as SUBSTR(), and the gap also rules out things like DECODE() and NVL() (to allow conditional DB selects). MySQL and Postgres support such functions natively, or have a similar native function.
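For readers who haven't met the Oracle built-ins named above, here's what they do, sketched via SQLite (a stand-in with similar native functions, as the post notes other databases have): SUBSTR() exists under the same name, NVL() maps onto IFNULL()/COALESCE(), and DECODE() maps onto a standard CASE expression. The table and column names are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (name TEXT, nickname TEXT, status INTEGER)")
con.execute("INSERT INTO t VALUES ('Margaret', NULL, 1)")
row = con.execute("""
    SELECT substr(name, 1, 4),                 -- SUBSTR: first 4 chars
           ifnull(nickname, name),             -- NVL equivalent: NULL fallback
           CASE status WHEN 1 THEN 'active'    -- DECODE equivalent:
                       ELSE 'inactive' END     -- value-to-value mapping
    FROM t""").fetchone()
print(row)  # ('Marg', 'Margaret', 'active')
```

In Interbase 6.0, per the complaint above, even the SUBSTR() half of this would first require installing the function into the database by hand.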

Because of MySQL's design, there's a silver lining here. The Gemini back-end (which, BTW, is the guts of the Progress database (NOT PostgreSQL, which is a competing open source database) under a different name, open sourced) is totally stand-alone, in the sense that the MySQL folks just have to continue to support the table-management API they already had for things like Berkeley DB, and accept bugs from everyone including NuSphere.

Outside of that, they can stick their fingers in their ears and yell, "lalalalala, we can't hear you!" all they like at NuSphere, and no harm comes of it. NuSphere, for their part, can stick their fingers in their ears too, because 99% of their effort goes into the Gemini back-end and their Apache/PHP/MySQL shrink-wrap bundle.

These two can feud all they like, and still work together seamlessly. This is the part of the open source benefit that most closed source types don't get yet. When they do, it's going to rock their world!

MySQL uses Progress? I had no idea. I used Progress for a short time and found it to be one of the most ridiculous and obtuse pieces of crap I've ever seen. But MySQL? Love it. My recollection of Progress is that it was more of an integrated development environment, with the DB wrapped in with code. Is this not true?

You're thinking of their 4GL. Every major database vendor has a 4GL ("fourth generation language", not my choice of terms). Most of them are really awful, but they have their place in certain business settings (right next to COBOL).

Progress the database was actually pretty nice, and had some features that are still not supported well in the rest of the commercial databases.

Oracle is saying that the Open Source community is not 'capable' of producing a dominant database

No. The point is that the design of a great DBMS takes a lot more unity than the large-scale projects OSS has tackled previously. In a DBMS, there must be an internal set of standards for everything from datatypes to join optimization logic.

Databases just don't lend themselves to fragmented development the way operating systems do. Frankly, I'm skeptical that an OSS project could (using current development practices) pull together and produce something as capable and stable as DB2 or Oracle.

Databases just don't lend themselves to fragmented development the way operating systems do.

Please back up that assertion. While you are doing that, I would like to remind you that the Postgres developers have never actually met in person. Yours is the same argument that every commercial software provider has employed over the last 5 years against free software. I think the truth of the matter, that no one yet knows the real theory behind project management, is far more frightening.

In regards to databases, what you have omitted is that the database problem is 2 decades younger than the operating systems problem. The OS problem was solved by the mid '70s. The RDBMS problem has only been 'solved', if you think it has been, by the early '90s. Linux was able to follow much more well-trod ground than Postgres, and I expect the development time of the latter to consequently lag. I have not, OTOH, seen any evidence supporting your assertion.

If you want stability in your database or in your OS, you have to follow the same principles.

First, you have to define/document the core functionality. Then you have to define/document your data structure. Next you have to define/document the modules that will provide that functionality, including all the interfaces. Only then do you start to write anything other than throw-away, proof-of-concept code.

Does the Open Source community have the patience to go through three rounds of development without any code? I don't know.

I do know that this technique works. I've used it repeatedly. Many years ago I was asked to produce the first ISAM (Indexed Sequential Access Method) for a PDP-11 that only had sequential & relative I/O. I delivered a self-maintaining ISAM in about 6 weeks, having had no previous exposure to the issues, but following this approach.

I've used the same approach to produce enterprise-wide MRP & CRM systems.

I've also used it to produce package software for manufacturing firms that my company successfully delivered, with a full, money back guarantee -- and none of our customers ever asked for a refund.

So I know the technique works with projects of great complexity. Complexity is not the issue. Discipline, to achieve functionality, compatibility and completeness, that's the issue.

OSS projects probably could produce something similar to Oracle or DB2, in time, but is there an Open Source market for the features that make Oracle and DB2 special?

For example, how many users of PostgreSQL, MySQL, etc. are planning on setting up a database that can stay available through nuclear attack or power grid failures? Generally, these super-duper robust databases get set up on several "big iron" servers, which means the licensing and administration for Oracle or DB2 just isn't that big of a deal relative to hardware costs, site construction costs, and staff salaries.

For certain projects, paying for Oracle or DB2 can actually save a heap of trouble, since they are so damn capable. This can make them more than worth their price.

On the other hand, I cannot advocate these super databases for small projects. That's where the current OSS databases fit in quite well. Therefore, would OSS really benefit from trying to compete with Oracle and DB2?

From Oracle: "Databases are dramatically more complicated than any Web server or operating system technology."

Somehow, Oracle is saying that the Open Source community is not 'capable' of producing a dominant database...

Not being overly qualified on the issue, but I thought they would both involve much the same issues: concurrency (e.g., SMP in OSes), failsafe behavior (hot-swap RAID, clustering, and a few features yet to be developed for Linux), reliability/stability (Linux has this!), efficiency, etc. It seems several free OSes have solved all the issues that database manufacturers would face, so what are they claiming could be so complex that free software people couldn't cope with?

I'd agree with the article that the Open Source offerings currently have many limitations when compared to the commercial ones, but this is more due to their lack of maturity than anything else.

We should remember that Oracle & DB2 have had over 20 years to get to where they are now. Have a look at this article [joelonsoftware.com] to see what I mean about maturity of products.

However, the open source community has several advantages and disadvantages relative to the commercial players. 1. We don't have all the legacy bloatware which makes the commercial offerings so large. 2. We're able to design using current best practice, not something which was dreamed up 20 years ago and no longer applies (no, I'm not talking about the relational model, but about things like distributed storage, SANs, NAS, etc.).

However, we also don't have the guarantee that the original developers will still be here in 10 years' time, working on the software and adapting it for new needs. Admit it: how many people are prepared to dedicate their careers to a single piece of software? Not many. So can you understand why commercial companies are less than eager to use open source for critical/production systems?

Commercial development has the same problems: programmers don't stick around forever, and it can be tough to maintain someone else's code.

I agree with the second part (since that's what I'm doing at the moment). However, as to the former, companies do have the advantage of stock options and lock-in periods.

That's what's allowed Microsoft to gain the position it has done, staff are recruited for a project and they only get their stock options if they stay for 4.5 years or more. This way you can get the same people working on at least 3 releases of a product, so by the end of their tenure they should be able to solve most problems and know the code inside out.

And of course at the end of 4.5 years the company can then offer them even more to stay or go away. 8)

I agree having the source is good insurance, but it still costs money to get people up to speed if the documentation's crap, so firms will always go to the organization which can supply the help required (e.g., support staff, documentation, bug fixes) even if it costs more in the short term.

Bah! I remember in the early '80s when big iron buddies used to point and laugh at dBase II. What they didn't understand, and what some of the big database boys and users don't understand now, is that larger isn't always better.

Databases like MySQL make it very easy for webhosting companies to offer free databases without losing their shirts or minds. They make it very easy for students to learn SQL. They're also much kinder on resources.

Yes, I'd love to be able to roll back pooched transactions, but then I have to commit everything as well. Certainly cascades would be slick, but poorly written, they can shoot your foot clean off. Likewise, I can see all the lame support calls coming in because users don't understand that the triggers are attempting to maintain referential integrity on foreign keys.

Within a given context, sometimes smaller suits the purpose better.
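The rollback capability wished for above is worth seeing in miniature. Here's a sketch using SQLite (standing in for the transactional engines discussed in the thread); the account table and the overdraft rule are invented for illustration. The point is that when a multi-step change fails partway through, the rollback makes all of it vanish instead of leaving the tables half-updated.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE acct (id INTEGER PRIMARY KEY, balance INTEGER)")
con.execute("INSERT INTO acct VALUES (1, 100), (2, 0)")
con.commit()  # starting point: account 1 has 100, account 2 has 0

try:
    # step 1 of a transfer that will turn out to be invalid
    con.execute("UPDATE acct SET balance = balance - 150 WHERE id = 1")
    (bal,) = con.execute("SELECT balance FROM acct WHERE id = 1").fetchone()
    if bal < 0:
        raise ValueError("overdraft")  # business rule fails mid-transaction
    con.execute("UPDATE acct SET balance = balance + 150 WHERE id = 2")
    con.commit()
except ValueError:
    con.rollback()  # both accounts return to their pre-transaction state

print(con.execute("SELECT balance FROM acct ORDER BY id").fetchall())
# [(100, ), (0, )] -- the half-done transfer left no trace
```

Without transactions, the application would have to notice the failure and hand-craft compensating updates, which is exactly the kind of code MyISAM-era MySQL forced on people.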

Whilst it's a good and interesting article, echoing many of my reservations wrt. open source databases, it misses perhaps the single biggest point: people need support, with an agreed escalation process, for the DBMS implementation, often the single most important component in any system.

If a database goes wrong (and in Oracle's case, my experience is that that's often), and we can't solve it ourselves, we need to be able to get on the phone and speak to somebody who can help. Now, I know that there are companies that offer support for OS DBMSs, but Oracle, Sybase and IBM's round-the-clock support offering is what I'm after, along with getting skilled technicians (possibly the development team itself) involved quickly. OK, so open source offers this by merit of "use the source, Luke", but in a corporate environment, this is neither likely nor necessarily sensible.

Another, and perhaps more important, aspect to bear in mind (and this is not covered by the article for obvious reasons) is that Oracle, Sybase and DB2 are not the be-all and end-all of RDBMS offerings. There are better, and often significantly cheaper, closed-source offerings out there. One of my current favourites (which I'm working with at the moment) is Clustra [clustra.com], a DBMS that offers 99.999% availability, scheduled and unscheduled, pretty much out of the box, with Linux as their first released OS for the latest 4.1 offering.

So, in a nutshell - Open source support offerings need to be improved, but don't rule out the smaller fish in this crowded, and very competitive pond.

Look at Great Bridge's website [greatbridge.com] and you will find that 24x7 support is available, for a price. Great Bridge also employs at least one of the developers of the PostgreSQL database. I don't know about any of the MySQL options but you can have it with PostgreSQL.

We do not offer true 24x7 phone support, but we get close: most support requests are answered in a couple of hours, and you are talking to the developer in charge of the code in question. Many of our customers have commented that the quality of our support is much higher than what they have ever seen from commercial vendors. For more info on our support options, see https://order.mysql.com/ [mysql.com]

What? A web developer bashing the SQL server basically designed for web development? Yeah, and for very simple reasons:

1) No transaction protection means no reliability (and no, hacked add-ins do not transaction protection make). At all. If you use MySQL for anything more than a teenage girl's weblog, you're asking for trouble the first time your CPU spikes.

2) No triggers means relying on slow connections to do all your work across servers. Update a table and need to update its relations? Well, you'd better know their structures implicitly.

3) No stored procs: I couldn't fricking believe this when I first used MySQL. What do you mean, no stored procs, no conditional logic in statements and no subqueries? MySQL basically requires you to code bad SQL... lots of crosstalk between servers and lots of iterative operations that should be done for you.

4) A very shoddy GUI. The shoddiest, in fact... it's the only GUI I've ever used for an RDB that was worse than raw SQL.

If you're doing any real development work, drop MySQL like a bad habit and pick up PostgreSQL. PostgreSQL does everything the big boys (DB2, MS SQL, Sybase, Oracle) do, and fairly well, meaning it can scale like a motherfucker. I ported our site from MS SQL to two PostgreSQL servers in a day and a half, after a week of trying to rewrite all our SPs in Java and C and basically reducing the speed of our hefty article management work to a pittance.

MySQL has a slight advantage over using comma-delimited text files or a good XML parser, but considering that there's a much better option in PostgreSQL, it will never touch my servers again. It's Free Software -- Free as in Free Dung.

At all. If you use MySQL for anything more than a teenage girl's weblog, you're asking for trouble the first time your CPU spikes.

Better go inform Yahoo that all of their unscheduled downtime is a direct result of using technology that can only power a teenager's weblog.

We use MySQL because it's stable, fast, easy to use, and doesn't cost us a ridiculous amount of money to run. We've seen one of our systems scale up from 0 queries/sec to 1200 queries/sec on x86.

Statements like this poster's are so frustratingly inaccurate that I've written a paper on dispelling these stupid myths. People can and do and ENJOY getting work done with MySQL. Don't succumb to senseless prejudices.

Right. Way to dispel one of my statements -- but there's still no stored procs, still no triggers, still nothing that would make this more than just another "make do" open source solution.

MySQL is usable, yes, and fast, but PostgreSQL is more useful with similar speed. So why use MySQL? My guess is that most MySQL developers are of the dominant school in OSS that fights against anything taught in MIS classes, meaning no objects, no complex relations, no self-cleanup or code reuse. Me, I'm all about the OOP principle of modular design and self-maintenance... PostgreSQL, through stored procs and triggers, allows me to code "almost" as if it were an OODBMS. This makes adding new functionality much faster and less painful... meaning that for a slightly larger initial investment I don't have to muck about in pages and pages of code to alter an update statement wherever it's used. And no, just putting the statement in an include isn't enough... that reduces your ability to include multiple statements within a single transaction, whereas stored procs reduce your number of necessary connections and vastly reduce the crosstalk which is always the biggest barrier in client-server applications.

So while you might be getting a few hundred extra connections per second by using MySQL, i'm reducing my connection count by a third. Your machine gun is highly effective, but I'll take my BFG.

And now they've got live backup stuff (if that's what you meant) that's getting to be pretty well done (there's still some issues on some types of queries, and if you have two (or more, I guess) servers backing up to each other w/ auto increment it can do some weird stuff).

Erm, no. This forces you to shut down the database to have it backed up.

I am talking about something like Oracle's archive log mode where you can have everything constantly backed up, and you can restore your backup to bring the database to its previous state at time X. And it works - every time.

Until you have something like that you are admitting that it is OK to lose lots of data in the event of a failure that requires restoration from a backup.

It just means you are going to have to run more than one query to get the exact data you want.

Thereby ensuring that your application runs slow as hell!! After all, why incur all that overhead of talking to the DB, searching the DB cache, scanning disc, and returning data once, when I can do it a half-dozen times!!!

The MySQL crowd just continues to remain ignorant of the fact that full SQL-92 support is not wanking, and it is certainly not a perf hit.
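The "more than one query" pattern being mocked above is easy to show side by side with the single SQL-92 join it replaces. Here's a sketch using SQLite as a stand-in; the author/post schema is invented for illustration. The loop version is the round-trip-per-row approach; the join lets the database do the work in one statement.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE post (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT)")
con.execute("INSERT INTO author VALUES (1, 'alice'), (2, 'bob')")
con.execute("INSERT INTO post VALUES (10, 1, 'first'), (11, 2, 'second')")

# N+1 queries: one extra round trip per post to fetch its author
slow = []
for post_id, author_id, title in con.execute("SELECT id, author_id, title FROM post"):
    (name,) = con.execute("SELECT name FROM author WHERE id = ?",
                          (author_id,)).fetchone()
    slow.append((title, name))

# One query: the join produces the same rows in a single statement
fast = con.execute("""SELECT p.title, a.name
                      FROM post p JOIN author a ON a.id = p.author_id
                      ORDER BY p.id""").fetchall()
print(slow == fast, fast)
```

The results are identical, but the loop version pays the connection/parse/plan overhead once per row, which is exactly the cost the reply above is complaining about.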

I don't know if there is a short answer to your question. To understand what the deal and difference is, requires several levels of knowledge.

For example, think of what it would take to organize your files. Then think of what it would take to automate the organization of files.

In other words, not only do you need to know what a relational database is, but also you need to understand it within the context of a database system. The article does an adequate job of keeping the discussion mostly on the former.

In other words, some of your confusion may be over how databases view the world, and/or some of your confusion may be over how database systems implement databases.

My question is, why is this? Is it a result of the DBMS itself? Is it a result of different training and methodologies? Is it a result of the developers being even more eccentric than program developers? It just seems like most often there is a much simpler way of getting the same result.

The answer is that our database applications are more complicated because they actually do something useful.