It certainly depends a lot on the circumstances. I found the article, though factually accurate, to be silly as a general case.

We use MySQL for several client sites, and it is an excellent tool. It's fast, it's cheap, and it supports the basic attributes of a SQL database. We don't need or use transactions on those sites! Wow, what a concept, you can program an application that works around this limitation!

The ACID test is a nice idea. I don't believe failure to meet it is something that rules out a tool. It just means I would not expect the system to be a transaction-dependent system. It would have to be mostly read-only, and the writes to it would have to be structured atomically.

We found Postgres to be OK, but somewhat "troublesome". A lot of strange errors (which I don't think we *ever* debugged), so unfortunately it's MySQL or Oracle for us these days.

Oh, and need I say MS SQL sucks? Don't get me started on how buggy that thing is.

We currently use MySQL on easy.to and serve 2-3 million, yes million, requests per day. The MySQL server also serves as the user database for the e-mail server.

There are very few updates to the database itself, and in the 2 years we have been running it we have only had one instance where it failed, and even then we only lost a few Last_Hit_Times as we log all transactions.

If we were to implement something that required more than a few updates per minute, we would look into PostgreSQL or Sybase, but as it stands now MySQL is the hands-down winner.

One instance where MySQL loses badly to Postgres is in transactions per second. On a decent machine Postgres can do hundreds of TPS, while I've been trying to get MySQL to do just one transaction for the last two years :)

As a developer who has worked with both, there are several differences twixt the two. Support for LOBs is much better in 8.1.5; with certain optional modules you can create full-text databases and search them with an extended form of SQL; there are significantly improved high-availability features; and more. I read somewhere that Oracle was considering calling 8.1 a version 9.0, but didn't because IT managers tend to be reluctant to upgrade to a major revision.

There's a chart on the MySQL site that compares MySQL against Oracle 8.0.5. If the comparison were between MySQL and 8.1.5 (or the most current version, 8.1.6), there would be almost no grey dots (indicating a feature doesn't exist), there would be some more check marks (indicating the feature exists in some non-standard way), and a whole bunch more stars (meaning the feature is fully supported). If they went the extra step of adding features that are in Oracle but not in MySQL (like full-text indexing and searching), there would be a lot of grey dots on the MySQL side.

8i is a suite of products built on top of the "regular" Oracle database, most recently 8.1.5 or 8.1.6.

As other posters have mentioned, 8i might offer other additional features, but the major feature as I see it is the Aurora Virtual Machine, a Java VM embedded in-process in the database. In particular, 8i provides an Enterprise JavaBeans container. (Sounds cool, but Oracle's EJB implementation is pretty weak; trust me, stay away from it for now.) 8i also provides the capability of creating Java stored procedures -- stored procedures implemented in Java rather than PL/SQL, which offers a lot more power and flexibility.

By the way, for everyone on /. who's always knocking Java, you'll note that it is the sandboxed architecture of the Java VM that makes it possible for Oracle to build a VM right into the database without fear of corruption. You can't do this in C (or Perl), at least not without adding a great deal of custom "sandboxing" (security, and more) code.

Oh, I'm not advocating the use of MySQL at all, nor excusing it. I'm saying that for people who want to hack a database to hold say... lyrics, or the tour dates for their band, it works. It's not an RDBMS, and it's not something I'd trust anything of remote importance to me.

Remember, most websites suck. This includes database-backed sites, and as far as I can tell, that's the most common use of MySQL. I've never seen anybody running anything of importance off a MySQL db.

The fact of the matter is that most people making websites aren't well-funded companies. And most commerce sites aren't run off MySQL. The types of sites I see people cutting their Perl teeth on and using MySQL for are things like www.fruhead.com [fruhead.com], which is a fan site and has some discussion stuff.

Nobody would care if there was a little corruption, and realistically it wouldn't matter.

As for my argument that most people don't know how Oracle works, perhaps I wasn't clear enough. I'm saying that most people a) can't afford Oracle, and b) if they could, wouldn't know how to use it. I'm not talking about real companies. At work, I use Solid and Oracle, depending on the task. They work for their respective purposes. I'm not saying these people who like MySQL for odd reasons are *RIGHT*. I'm trying to explain the rationale. You're right, good developers *do* continue learning forever. But be honest and think about every 'developer' you've ever met. Were a majority of them good? Didn't think so.

Again, I repeat myself... I don't agree with the majority opinion, but I do believe I can explain it.

The fact is that for the majority of database backed sites, cost is far more important than functionality when it comes to databases. MySQL is cheap in almost every sense of the word. It costs nothing to use, administration is incredibly simple, and there's nothing particularly confusing about it.

A lot of people, even given the funds to purchase a DB like Oracle, don't know how it works. Thus MySQL works well enough for them. They don't care about the lack of stored procedures because they've never used them before. They don't care about the high-load write performance problems because they won't have load.

Basically, if you want a SQL interface to Windows Notepad, MySQL is for you.

From a functionality standpoint, Postgres has fantastic integration with Red Hat at the moment if you want an open-source database. If you need scalability, use the free Sybase and upgrade to the commercial version when you have the need and the money.

When you need to use a subquery, you need to use a subquery. If they aren't available, you've got to take several steps to emulate them.
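To make the workaround concrete, here's a hypothetical sketch using Python's sqlite3 module as a stand-in engine (the schema and names are invented for illustration): when the database can't nest the query, you run the inner query first and build the outer query in the application.

```python
import sqlite3

# Hypothetical schema, invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob'), (3, 'carol');
    INSERT INTO orders VALUES (10, 1), (11, 3);
""")

# With subqueries you would just write:
#   SELECT name FROM users WHERE id IN (SELECT user_id FROM orders);
# Without them, run the inner query, then feed its results into a
# second query assembled in application code:
ids = [row[0] for row in conn.execute("SELECT DISTINCT user_id FROM orders")]
marks = ",".join("?" * len(ids))
names = [row[0] for row in conn.execute(
    f"SELECT name FROM users WHERE id IN ({marks})", ids)]
print(sorted(names))  # ['alice', 'carol']
```

Two round trips and some string assembly instead of one declarative statement -- exactly the kind of "several steps" the parent means.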

Just imagine if someone built a Linux distribution like MySQL built their database. "fsck is too hard to implement, so we are leaving it out. We didn't have time to implement all the options to ls. Shared libraries don't perform well, so we don't use them, and we don't like glibc."

In a previous life, I worked at a place where we considered the Harlequin RIP, but declined to use it because it didn't grok our peculiar (non-ADSC) PostScript. We went with another proprietary solution (a bit too drunk right now to recall what). gs was never considered. We were driving a CTP imager, and if we had the slightest problem we had to be able to ring someone up and have them resolve the problem on the spot.

Which is in fact exactly the service we received. It neither surprises nor upsets me when I contact an author of an open source package about a bug and receive either no response or a response several weeks later. In a production environment that would be suicide.

So in answer to your question, people use Harlequin instead of gs because they are buying a service, not a program, and that service includes immediate problem resolution, and people that will speak to them there and now, not when they have the time to get around to the problem. It's not significantly faster (when it works) but it's significantly faster (when it doesn't work).

MySQL seems to be pretty much the default choice for people (who aren't companies) building a database-driven website. Can someone explain to me why? Unlike Postgres, it isn't a real RDBMS as the article explains, it isn't that much faster and sometimes it's slower, and it's not Open Source. What's behind its popularity?

Isn't part of the problem that Oracle, DB/2, Interbase, etc. actually work pretty well for people? By which I mean, if you have the money, there is NO reason to choose MySQL over them. This is not Apache vs. buggy, bloated IIS, or Linux vs. buggy, bloated Win2000. This is like a Dodge Neon vs. a Miata. Most of the other open source products can say "we're cheap AND we're better". But not MySQL.

Another point is that an RDBMS is not like a webserver or an OS. You can't just tack extra features onto the side of it to make it better or more competitive. Implementing something like rollbacks or transactions or subqueries (all important things lacking in MySQL) is going to touch every part of the code. And I don't think those kinds of changes get done very well in a Bazaar environment.

- They don't know what they're talking about on the obscure technical issues that we like.

- Most people could use it just fine, but that's not the point!

- Okay, sure, its fast, but our database is safer!

- Why does everyone have to use MySQL anyhow, hmm?

This sounds like a case of sour grapes, pure and simple. One database is not going to be for everyone. Oracle is expensive, MySQL isn't robust enough for some people, and PostgreSQL is slow (compared to MySQL, for some common operations).

So, my advice: if money is tight, don't use Oracle. If data integrity is your utmost concern, don't use MySQL. If speed is an issue for your little database on your overworked computer, don't use PostgreSQL.

...And if you live in the real world, don't whine about how software application X is better than popular software application Y for reasons that many people don't care about, because you will be flamed to a crisp, especially on Slashdot. Just admit that software X has a place in the world, and some people, maybe even many people might want to use it, but they are other people, doing other things.

Example:

Microsoft Word is a popular word processor. But it is not robust enough for some people's needs. 95% of people might be able to do their job just fine having it crash and auto-recover their documents, but for the 5% who are working on mission-critical data or get their documents eaten, this isn't good enough. They would rather waste their time working on free solutions with bad Word DOC import/export routines. For those who care about proper formatting and ease of use, however, Word's stranglehold on the market makes it a more useful product for now.

Notice that I didn't say "Word has no place in the market", or even argue that "Microsoft didn't know what they were doing when they made Word!", but rather showed some strengths and weaknesses, and let you decide which category applies.

Nothing like good old-fashioned IRC support: "My database is down, my company is going down the drain, the users can't work, what do I do!?!?!?" Typical IRC reply: "Go fsck yourself".

And you think you'll get better support on IRC for a proprietary database? Were it Oracle instead of mySQL in question, I think the statement you mentioned would be followed by several minutes of hearty belly-laughing.

If you want to be sure that you'll get an answer to your support request, you pay for it, just like with a proprietary app.

If what you want is raw, fast storage, use a filesystem. If you want to share it among multiple boxes, use NFS. If you want simple reliability against simplistic failure, use mirroring. Want a SQL interface to it all? Use MySQL.

The main reason people use MySQL is that it's the quickest, easiest way of making your data SQL-accessible.

People rave about the performance, but most of the time the performance is only useful to excuse quick-and-dirty programming, poor db design, or poor system architecture. The real benefit is that it stores your data somewhere where you can import, export, and query it easily. That's all a lot of people need; certainly, it's all I need for 90% of what I do on the web.

Yes, there's a good reason for it -- virtually all IT managers are mindless sheep. It's the old "no one was ever fired for buying IBM/Microsoft/Oracle" principle.

Bollocks.

You appear not to work in the very large e-commerce business. You assume that people who do are sheep because, well, _you_ can do your stuff fine with other products. You assume that because people do not buy the functionally best product, they must be sheep.

I do not work designing very large e-commerce sites. I do work in a position where I see the RFPs and architecture requirements for these sites, and I have worked with most of the smaller products such as MySQL, PostgreSQL and so on.

Believe me, these tools are not even close to cutting it. Sure, many of those Oracle shops could (technically) use Sybase, MS-SQL, Informix and so on, but the support contracts required for these sites are very demanding -- engineers on site within hours, *CODE FIXED* within hours. Yes, Oracle will recompile Oracle and ship you a fixed binary within 24 hours of a bug report -- you just have to pay...

And yes, I am an IT manager, and yes I can write programs and put machines together, and yes I have installed Linux from floppies - does that make me not a sheep?

The 'no one was ever fired for...' problem is alive and well, but it is not the explanation for every instance of a (oh, horror) closed source commercial product dominating a niche market. Think about it.

One thing I didn't like about Postgres, was that there was no concept of users and passwords in it. Am I going to let X users share my machine in an environment where everyone can see everyone else's tables? I think not.

What are you smoking?

Postgres has all the regular GRANT options any RDBMS has, in addition to its Host Based Authentication in pg_hba.conf which allows you to trust, deny, ident, use kerberos or a regular password for any database it's in charge of!
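For anyone who hasn't seen it, the access control the parent describes lives in two places: ordinary GRANT/REVOKE statements for per-table rights, and pg_hba.conf for who may connect at all. An invented illustration of the latter (the column layout here follows later PostgreSQL releases; older versions differ slightly):

```
# pg_hba.conf -- illustrative entries only
# TYPE  DATABASE  USER     ADDRESS        METHOD
local   all       all                     password
host    webdb     webuser  10.0.0.0/24    password
host    all       all      0.0.0.0/0      reject
```

So users can absolutely be walled off from each other's databases, contrary to the parent's claim.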

Just because it doesn't have a 'vendor' doesn't mean it's not supported. There are people who are just as knowledgeable about MySQL as others are about MS SQL Server or Oracle. Get your 99.999999% uptime contracts from them.

Major industry support? Like thousands of sites using it? I don't think that it can realistically be argued that MySQL is anything but fast and stable.

The big question is: Do you need transactions? If you do (and granted, most systems that involve money are going to want them) then MySQL won't work for you. If you don't, it will be fast, flexible, and free.
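For readers wondering what "needing transactions" actually looks like, here is a minimal sketch of the classic money-moving case, using Python's sqlite3 module as a stand-in (account names and the invariant are invented for illustration). Without rollback, a failure between the two UPDATEs would leave the money half-moved.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move money atomically: either both rows change or neither does."""
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        # Check an invariant mid-transaction; failing here must undo
        # the debit above as well.
        (balance,) = conn.execute(
            "SELECT balance FROM accounts WHERE name = ?", (src,)).fetchone()
        if balance < 0:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.commit()
    except Exception:
        conn.rollback()
        raise

transfer(conn, "alice", "bob", 30)        # succeeds
try:
    transfer(conn, "alice", "bob", 500)   # fails, leaves balances untouched
except ValueError:
    pass
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 70, 'bob': 80}
```

If your application never has multi-statement updates like this, that's exactly the case where MySQL's omission doesn't bite.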

Before everyone starts attacking MySQL and citing Oracle's technical merits, we should probably note one thing. First and foremost, Oracle has hideously high pricing. Currently, my company is looking into Oracle licenses to replace our PostgreSQL setup, and the nice fellow at Oracle tells me about a "great deal" in which I'll be paying per clock cycle, i.e., 500MHz means 500 * (rate). This is because, of course, web sites only use one user to access the database, making licensing fairly cheap. PostgreSQL, on the other hand, is probably the most "free" of the major RDBMSs, provides next to all of Oracle's features, plus some unique features like 'snapshotting' as opposed to row or table locking, sequences, and more.

All in all, PostgreSQL, speed considerations aside, is a worthy option for people considering Oracle. The only reason most people consider Oracle is its reputation. People hear "Oracle" and they believe you're taking your data seriously.

For speed, MySQL's only advantage is SELECT speed, as INSERTs are quite slow in comparison. If you're running a website that's fairly non-interactive (like mine: Punknews.org [punknews.org]), you'll be happier with MySQL for speed, but if you're dealing with a lot of user-input data, PostgreSQL will make up for the SELECT speed with much faster, multi-threaded, non-table-locking INSERTs and UPDATEs.

it isn't that much faster and sometimes it's slower, and it's not Open Source

While I haven't done any performance testing myself, others have found that it really is significantly faster for common tasks (e.g., simple selects for a website). And while it's not open source, it's close enough for most people. The source is available, and the only restrictions on usage are if you resell the database itself.

[OT] BTW, why use the Harlequin RIP? It's not significantly faster than gs, and it's not Open Source. Not trying to be provocative, I'm just curious...

Although Ben Adida is correct in every fact about MySQL (lack of subqueries, poor ACID support, no foreign keys), his conclusion is a bit overstated.

The full RDBMS model is not needed by everyone. This is clearly the case for Slashdot and many other sites and companies that I know use MySQL. If you need transaction support, please don't be bitter that MySQL doesn't support it. It doesn't claim to.

Yes, the speed of MySQL is nice, but the SQL interface is the selling point for me. I can model and access data more consistently from project to project because I don't have to worry about low-level issues (directory structures, multiple users, etc.).

I could go on and on about both sides here, but let it suffice to say that there are no absolutes in software, only tradeoffs. MySQL lacks certain features of other DBMS's, but in many cases it may be sufficient to implement these features, such as transactions, rollbacks, etc., in application software rather than spending 100k/yr. on Oracle. However, beyond a certain point, the cost of implementing and maintaining these safeguards may exceed the cost of a DBMS that already includes them.

While "the best tool for the job is the one that works" is a good thing to work by, remember one little thing. A tool may seem like it's working until one critical point of failure. Kind of like using a 7/16-inch socket on an 11mm nut. Yeah, it'll work most of the time, until you hit that one soft nut that you end up stripping. Then you sit there all pissed off that you didn't use the 11mm like you should have.

And this is exactly what the article was about. Being sure to evaluate the tools you use properly and safeguarding your valuable critical data as best you possibly can!

I'm guilty of having developed a complex website with MySQL, and of having pushed past its limits. Now I'm in some trouble. That being said...

MySQL is indeed a perfect database for 90% of the sites out there. And in the 10% of the rest, it's the perfect tool for 50% of the job.

Example? Do you need ACID to store user comments? I don't think so. Do you need Oracle to store logs that you want to query and aggregate quickly? Certainly not. Do you need transactions to store access control lists for HTTP authentication? Hardly ever! Would you want to spend $20k on a quad-proc, 512MB, RAID-enabled server just to be able to conveniently store tens of thousands of user cookies? You must be joking.

All this is what MySQL does best and what other databases will do poorly. I don't see, in these real-life, heavily tested cases, where it fails in any way to fulfill the need.

In most cases, this won't be too much of a problem. Table locking will make the system choke when there are a few long, complicated queries and lots of writes. Typically, in a web-based environment, this does not happen much. In my case, for instance, I have very frequent updates on my 'cookie' table, which records the last request from each user so it can compute when to automatically log them out. I don't need any long, complicated queries on this one, just very quick one-line selects.

This will not work, obviously, for applications such as financials, where you need to query and update very often. But that's really not MySQL's target.

Same here. A long time ago, Monty logged into my machine and tracked the crash down to a weird unofficial patch I had applied to the Linux kernel... duh!!! I didn't even remember it was there in the first place... I feel very silly in retrospect.

Neither MySQL nor Berkeley DB is a substitute for a real SQL database. Berkeley DB at least has a solid foundation that could be extended to become a full SQL database, should someone write the necessary code. MySQL, on the other hand, has fundamental deficiencies that are hard to correct.

I fail to see how Berkeley DB is any better than MySQL. MySQL's ISAM (and new MyISAM) storage structure is very well tested and very reliable -- it's been in development for almost 20 years now! MySQL's weakness is exclusively in its SQL frontend. It seems to be rather straightforward to extend, and it's programmed rather cleanly from what I've seen of the code (yes, I've looked at it). It also uses standard POSIX threads, which is why it is very portable.

Part of it was the source code licensing issue, since postgres is free software (bsd license) and mysql isn't.

The other part is that while I was learning RDBMSs, I asked around about a couple of different ones (I knew I wasn't using oracle, for both license and cost reasons), and everybody seemed to have the same opinion -- mysql is really fast, but it can't do this, can't do that, etc., and they proceeded to spit out a long list of things you had to work around with mysql in order to get the job done.

On the other hand, I've been using postgres quite happily for quite some time. For my applications, there isn't a serious speed difference between mysql and postgres, (in fact sometimes I think there is no difference) and it lets me march to the beat of my own drummer rather than building sometimes ugly constructs to get around the lack of features in mysql.

The $750 license fee you saw is for 1 named user inside your organization. This is definitely not a site license. This does NOT cover internet use. If you have 100 employees using the application, the fee is $75,000. For an internet application on Intel, the price is $100 x the sum of CPU speeds. I.e., for a 2-processor, 500MHz system, the price is $100 x 1000, or $100K. A RISC system uses the same model, at $150 x the sum of CPU speeds. 8i is simply the current version of Oracle. Oracle was going from 8.0 to 8.1, but renamed it 8i to jump on the internet bandwagon.

I'd love to be wrong, but this is from the Oracle Store, today: ------------------------------- Universal Power Unit Calculation: The total number of Universal Power Units (see above for the definition of a Universal Power Unit) needed is calculated by taking the MHz on each processor and multiplying by the total number of processors running the programs in all such computers. The Universal Power Unit computation includes the number of relevant computers, processors, MHz of each such processor, and the relevant platform factors (Intel/CISC 1.0, RISC 1.5, and Mainframe 24).

Example 1: You are licensing the Oracle8iEE for a computer which has 2 - 400MHz RISC processors. How many Universal Power Units do you need?

1 computer x 2 processors x 400MHz x 1.5 (RISC factor) = 1,200 Universal Power Units for each program that you license for this computer. ----------------------------- List prices are $100/Universal Power Unit for a perpetual license. It's $35/UPU for a 2-year license only.
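If I've read the quoted formula right, the arithmetic works out like this (a quick back-of-the-envelope script; the platform factors are copied from the Oracle Store excerpt above):

```python
# Platform factors from the quoted Oracle Store text.
FACTORS = {"intel_cisc": 1.0, "risc": 1.5, "mainframe": 24.0}

def universal_power_units(computers, processors, mhz, platform):
    """Machine count x processor count x MHz per processor x platform factor."""
    return computers * processors * mhz * FACTORS[platform]

# Example 1: one computer with 2 x 400MHz RISC processors.
upus = universal_power_units(computers=1, processors=2, mhz=400, platform="risc")
print(upus)        # 1200.0 Universal Power Units
print(upus * 100)  # 120000.0 -- perpetual license at $100/UPU
print(upus * 35)   # 42000.0  -- 2-year license at $35/UPU
```

So that modest dual-CPU RISC box runs $120K list for a perpetual license -- which rather supports the parent's point about pricing.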

I think it's because so many people have developed libraries and support code (in Perl/PHP) for MySQL- probably moreso than for PostgreSQL. Plus, big website players like Bugzilla and Slashdot make an impact on the newbie web developer making a decision on which DBMS to use.

I've been using PostgreSQL since Postgres95 (so... uhm... 1995) and haven't had any problems with it at all. It is actively maintained by an excellent group of open-source developers and, IMO, is VERY easy to administer.

> OTOH, take the article's argument and flip it on its head: a *real* enterprise RDBMS, like Oracle, will *never* be as fast as MySQL for web applications, precisely because it's worrying about transactions and rollback

Final answer? Would you like to call a friend? (Ok I promise I'll never use that gag again)

First off, MySQL only supports table-level locking, which means every last update to a table has to be serialized. Can you imagine if Amazon had to serialize all its orders? Or a payroll system that had to process every paycheck sequentially on payday?

Secondly, in a real database, transaction logging can be turned off at the database level. And rollback/commit overhead simply doesn't happen if you don't enter a transaction in your update in the first place.

> MySQL has no triggers or foreign key constraints

Right. Because nobody needs them. They obfuscate the code and are only essential in theoretical discussions about RDBMSs.

I haven't heard such horseshit in ages, and again it's moderated "insightful", likely owing to the wonderful job done HTML-formatting the post rather than any actual thought that entered into it. I'll give you a trivial example: timestamping. An author updates a document in storage, and the change is timestamped by a trigger, and perhaps an author worklog is timestamped as well. Whether from a client application in Perl or VB or Python or a direct SQL query, this triggered update gets done and done right. And that's a trivial example. Your evidence seems to be "I don't use it, therefore no one else needs it." It's the same reason Linux doesn't have a programmatic transactional system to update the goddam passwd files in a 30,000-employee system -- because Joe's Web Design And Auto Body doesn't need it.

Oh but that can be done at the application level, right? Some of us prefer to declare what we want and let the system generate the code properly. If I wanted to do everything at the application level, I'd use a damn tape-driven turing machine.
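The timestamping example above is easy to make concrete. A sketch using Python's sqlite3 module (the schema is invented, and SQLite's trigger syntax differs a little from Oracle's PL/SQL, but the principle is identical): the stamp gets set no matter which client issues the UPDATE.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE documents (
        id INTEGER PRIMARY KEY,
        body TEXT,
        updated_at TEXT
    );
    -- The trigger fires no matter which client issues the UPDATE,
    -- so the timestamp can never be forgotten by application code.
    CREATE TRIGGER stamp_update AFTER UPDATE OF body ON documents
    BEGIN
        UPDATE documents
        SET updated_at = datetime('now')
        WHERE id = NEW.id;
    END;
""")
conn.execute("INSERT INTO documents (id, body) VALUES (1, 'draft')")
conn.execute("UPDATE documents SET body = 'final' WHERE id = 1")
(stamp,) = conn.execute(
    "SELECT updated_at FROM documents WHERE id = 1").fetchone()
print(stamp is not None)  # True: set by the trigger, not the client
```

Declare it once in the schema, and every client in every language gets it right for free.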

MySQL just released a very nice book detailing its assorted innards with plenty of in-depth coverage. I've yet to find a book anywhere on PostgreSQL (though one is being written). Online docs are nice, but I need more info before switching from a known (no matter how feature-poor) to an unknown. One book would make a world of difference.

I've recently had the task of creating an in-house database driven app. By splitting it into a three tier system with a layer between the DBM (initially MySQL) and the web code, I have the option of swapping out SQL servers with only a fraction of the effort of rewriting the whole system for a new database.

It may not run as blazing fast as an app tied directly to the DBM, but I consider not being tied to a specific DBM to be a Very Good Thing. Does anyone else think this is a good idea?
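For what it's worth, here's roughly what such a layer can look like, sketched in Python with sqlite3 as the stand-in backend (the class and method names are invented for illustration):

```python
import sqlite3

class UserStore:
    """All SQL lives behind this interface; the web tier never
    touches the database driver directly."""

    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")

    def add(self, name):
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        self.conn.commit()
        return cur.lastrowid

    def get(self, user_id):
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
        return row[0] if row else None

# Swapping SQL servers means reworking this class's internals,
# not the web code that calls add() and get().
store = UserStore(sqlite3.connect(":memory:"))
uid = store.add("alice")
print(store.get(uid))  # alice
```

The cost is one extra layer of indirection per call; the benefit, as the parent says, is not being married to one vendor.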

The PostgreSQL user guide is supplied in the distribution as a postscript file which you can print on your local line printer; it runs to 216 pages and is extremely detailed. Of course, you don't get a glossy cover...

I like the Acrobat versions better for online reading, myself. But MySQL's online docs weigh in at almost 450 pages and the book is about 750. Even adding the tutorial, admin, and programming docs to the PostgreSQL total doesn't bring it that high :\

I'm seriously giving PostgreSQL a try, but I think having more available documentation - especially dead tree editions - would be a big help.

I am doing mission-critical stuff with mySQL. It started when I created the company's elaborate e-commerce web site, moving it from SQL Server on Windows to mySQL under Linux. When that proved to provide superior performance, we began moving our contact management system (under test at this very moment) to mySQL as well.

I really don't understand this oddly religious love for ACID. Yes, you need it if you're a bank and must coordinate double-entry accounting transactions. But darn few things in the world really require double-entry accounting.

What we most assuredly need is a database that hangs together, working with minimal maintenance and bother, one that a single person can easily maintain without reading 10,000 pages of documentation.

If there is a power outage, the UPS fails, and our server dies, we might well have bits and pieces of a sales order in our database. We can handle that, because I set a flag in the main sales order file saying "this isn't done, ignore it", and when everything's done, I set the flag to "looks OK, let's do it". In one sense, I'd really rather have some evidence of the order in our database, because then we can call the customer, apologise, and ask her or him to re-enter the order. Far better than leaving her or him in doubt.
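That flag technique can be sketched in a few lines (Python with sqlite3 standing in for mySQL; the schema is invented for illustration): write the order flagged as incomplete, add its pieces, and only then flip the flag. Readers ignore anything still pending, so a crash mid-write leaves debris but never a half-order that looks finished.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, complete INTEGER DEFAULT 0);
    CREATE TABLE items (order_id INTEGER, sku TEXT);
""")

def place_order(conn, order_id, skus):
    # Flag starts at 0: "this isn't done, ignore it".
    conn.execute("INSERT INTO orders (id, complete) VALUES (?, 0)", (order_id,))
    for sku in skus:
        conn.execute("INSERT INTO items VALUES (?, ?)", (order_id, sku))
    # The last write "publishes" the order: "looks OK, let's do it".
    conn.execute("UPDATE orders SET complete = 1 WHERE id = ?", (order_id,))

place_order(conn, 1, ["widget", "gadget"])
conn.execute("INSERT INTO orders (id) VALUES (2)")  # simulate a crash mid-order

visible = [r[0] for r in conn.execute("SELECT id FROM orders WHERE complete = 1")]
print(visible)  # [1] -- the half-written order 2 is ignored
```

It's poor-man's atomicity: the single flag update is the only write that has to be atomic, and single-row writes almost always are.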

Other than that, I see no compelling advantage to using a database with ACID. You might argue that my company could have afforded Oracle, and you would have been right. But I'd used mySQL on databases used for all sorts of stuff on my web site, and it had never let me down. I already knew the APIs and how to administer a server, so I was ready to rock and roll on the new application. If I'd had to use Oracle or Postgres, it would have taken me a week or two of precious development time to get comfortable with the tool. That might have been acceptable if we weren't running on web time nowadays.

The bottom line is that I'm inclined to defend mySQL from its formalist detractors. We've used it, and in the month or so of hard use we've given it, it hasn't let us down yet. I might add that everyone loves the application, its responsiveness and my ability to make changes rapidly and efficiently. This is all because I used an environment I already knew well.

I should give a caveat based on the nature of financial transactions. I suspect that the original poster's database did not use double-entry bookkeeping principles. If you do, I'd argue for transactions based on the complexity of updates. But for a single-entry database with simple updates, mySQL will work just fine.

My system uses mySQL to connect to a Microsoft SQL Server system running an accounting program to do the double-entry stuff.

Is it just me (with my minimal familiarity with databases) or did anyone else keep getting hung up on this article's references to the ACID test?

The OpenACS project refuses to break with the important principles of the ACID test.

I mean, I hear "acid test" and I think of the Electric Kool-Aid variety. What I end up with is this image of a bunch of hackers all flipped outta their gourds, with a couple of guys from OpenACS just shaking their heads that the MySQL guys can't handle the overstimulation...

In MySQL you have an application-level lock on an arbitrary string, which is something with finer granularity than transactions. And faster, for that matter. It also allows you to use the database for application-level locks when no SQL is actually being done.

So from your statement I guess you suffer from the well-known acute inability-to-read syndrome. Go read the fsck^H^H^Hine manual and then make statements about something you do not understand.

One more ignorant idiot that thinks that he/she knows everything... Sigh....

10th time in a row: MySQL has arbitrary string locks. If you want a lock on a record, then do an application-level lock on a string with a generated record name. Read the fscking manual... And it has had this for years...

One more idiot. It looks like today Slashdot will be Vanity Fair 2. Read the fscking manual.

> MySQL has no stored procedures

Absolute lie. It can load .so objects and has a known API. Yes, you cannot do a macro define and store it. You can do something better -- place code on the server to do the job.

> MySQL only has table-level locking

I am getting sick of shouting. MySQL has arbitrary string locking, has had it for years, and it even has a non-blocking mode (you can stall on such a lock or wait an arbitrary number of seconds). Go RTFM.
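For readers who haven't met them: these are MySQL's GET_LOCK('name', timeout) and RELEASE_LOCK('name') advisory locks. A rough in-process imitation in Python (function names mirror the MySQL calls; the implementation is invented purely to show the idea of locking an arbitrary string, such as a generated record name, rather than a whole table):

```python
import threading

# Invented stand-in for MySQL's GET_LOCK()/RELEASE_LOCK() advisory locks.
_locks = {}
_guard = threading.Lock()

def get_lock(name, timeout):
    """Acquire an advisory lock on an arbitrary string; False on timeout."""
    with _guard:
        lock = _locks.setdefault(name, threading.Lock())
    return lock.acquire(timeout=timeout)

def release_lock(name):
    """Release a lock previously acquired with get_lock()."""
    _locks[name].release()

# Lock a generated "record name" to get row-like granularity
# on top of a storage engine that only locks whole tables.
print(get_lock("orders.row.42", timeout=1.0))   # True -- acquired
print(get_lock("orders.row.42", timeout=0.1))   # False -- times out, no block
release_lock("orders.row.42")
print(get_lock("orders.row.42", timeout=1.0))   # True -- free again
```

The cooperative catch applies in MySQL too: every client has to agree to take the lock before touching the record, since nothing enforces it.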

As a professional IT consultant working for one of the top names in the software industry I am working on a detailed report into the "open source" phenomenon (thanks to various people for pointing out that it is not freeware per se) as started by Linus Torvalds with his Linux operating system some six years ago. I browse this forum for insights into the Linux user and developer communities.

Allow me to correct you a bit further. "Open Source [opensource.org]" is essentially a marketing program (on practical grounds rather than ideological) for "free software [gnu.org]". Richard Stallman started the free software movement when he started the GNU Project [gnu.org] in 1984 with the aim of creating a freely-distributable reimplementation of Unix.

Linus Torvalds significantly popularized free software with Linux (which he started in 1991). Linus provided the last missing piece of the hitherto incomplete GNU system -- the kernel. It's a critical component, but bear in mind that without the prior work of the GNU project, Linux wouldn't be where it is today.

The "Open Source" movement was created in response to Netscape's announcement in January 1998 that they would release the source code to their browser. The relabeling has been very effective, as can be seen in events of the past two years. Netscape's source code release has been viewed by some as less successful, but Mozilla [mozilla.org] is alive and well, and has made very significant progress since the original source code release. The jury's still out on this one; my personal belief is that Netscape will be making a comeback in the browser market based on the Mozilla efforts.

Anyway, my question is, is there a fully-featured open source RDBMS out there? Your help is appreciated.

You might want to check out PostgreSQL [postgresql.org]. It's an object-oriented RDBMS with SQL support as well as transactional integrity. It used to be considered only suitable for academic use, but much work has been done in the last 5 years, and from what I've heard it's one of the most solid free databases out there...

I believe MySQL is so popular because it's simple, really fast, and very easy to work with. And not everybody really understands about the database integrity dangers inherent in MySQL's approach, or why they should even care.

I've often wondered if the MySQL frontend could be merged with a Berkeley DB backend... (There is an older version of MySQL that has been re-released under the GPL...)

Berkeley DB already supports transactions and ACID properties. MySQL doesn't. That makes Berkeley DB more solid at the low level. It doesn't support the high level. That's why neither is a complete substitute for Oracle.

This is very good news, although I didn't see any mention of transactions and journalling in that press release, only database replication. (Of course, to do proper replication, you should have distributed transactions...)

So how feasible is it to combine MySQL's SQL parsing and query processing with Berkeley DB's storage layer?

As I mentioned in another posting, I've been wondering this for a while now. You would obviously have to start with the MySQL version that was released under the GPL; I believe that Sleepycat allows free redistribution of the Berkeley DB code if the source code of the application is distributed, as the GPL would require. This may mean that Sleepycat would allow Berkeley DB to be distributed under the GPL? (Might be worth asking them...)

As for how hard it is, I have no idea. I don't know how clean the interface is between the frontend and the backend, or if there's even a clear boundary at all. Berkeley DB has a clean interface, but I don't know if it would be easy to plug it into MySQL.

That link has nothing to say about client/server. The real answer is that Berkeley DB doesn't do client/server operations natively because that's at a higher level than Berkeley DB is designed for. It's intended as an embedded database, usage of which is determined by the application it's embedded into. It's not designed as a generic database server.

It's straightforward to make an application that is a TCP/IP server frontend for a Berkeley DB database backend. I've done exactly that, and it works fine. In fact, the code that I wrote for this purpose has been in use at a regional ISP for over 50,000 users to handle authentication information for RADIUS and Email...
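A toy sketch of that kind of frontend, with a Python dict standing in for the Berkeley DB handle and an invented one-line wire protocol (the real RADIUS/email service would obviously differ):

```python
# Toy TCP frontend over an embedded key-value store, in the spirit of the
# auth server described above. The dict stands in for a Berkeley DB handle;
# the protocol is invented: "GET key" -> "VALUE <v>" or "NOTFOUND".

import socketserver

store = {"alice": "secret"}  # would be a Berkeley DB database in practice

def handle_line(line):
    """Parse one request line and answer from the store."""
    parts = line.strip().split(" ", 1)
    if len(parts) == 2 and parts[0] == "GET":
        value = store.get(parts[1])
        return f"VALUE {value}" if value is not None else "NOTFOUND"
    return "ERROR"

class AuthHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for raw in self.rfile:
            self.wfile.write((handle_line(raw.decode()) + "\n").encode())

# To serve: socketserver.TCPServer(("", 9999), AuthHandler).serve_forever()
```

All the client/server machinery lives in this wrapper; Berkeley DB itself never needs to know a network exists, which is exactly why it stays small and fast.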

I would never trust the master copy of any critical data to MySQL. It's a nice little database, simple and very fast. However, without transactions or journalling, there's no way to trust completely that the data is safe. I would use MySQL as a high-speed cache, but not as a master repository; you may need to rebuild the database at any time if it becomes corrupted.

Berkeley DB [sleepycat.com] is one of my favorite databases, because it is also small and fast. Unlike MySQL, it does support ACID properties, transactions, journalling and rollback, etc. However, it doesn't support a schema layer or a SQL processor. It's designed for embedded applications.

Neither MySQL nor Berkeley DB is a substitute for a real SQL database. Berkeley DB at least has a solid foundation that could be extended to become a full SQL database, should someone write the necessary code. MySQL, on the other hand, has fundamental deficiencies that are hard to correct. (On the bright side, that's why MySQL is really fast.)

Whether a database supports transactions or not has nothing to do with how reliable it is. Data loss and crashes occur because of bugs in the database code, misconfigurations, disk crashes, and bugs in the application code.

Transactions are a specific feature to ensure consistency when multiple updates need to happen atomically. The requirement for transactions is driven by the task you are trying to solve. If your task doesn't need multiple updates to happen atomically, you don't need transactional support. Even if your task does require transactions, having them built into your database is only one of many options for implementing them. The transactional needs of most web applications are very simple and can often be much more efficiently implemented in the application code on top of a database that does not have transactional support.

I suspect most web sites actually get more reliability out of MySQL than they would out of Oracle (and better performance to boot). MySQL is much easier to configure, it's a much smaller and simpler software system, and it has very straightforward logging facilities that you can use for replication and recovery. All of those properties remove a lot of the risks that really cause data loss.

That isn't to say that Oracle or some other transactional database is never the right choice. For the applications those databases are designed for, business and financial data, their feature sets are appropriate and their poor bang-for-the-buck is less important. On the other hand, for really high performance web applications, even MySQL is way too slow, and developers have to roll their own.

I think this is actually quite analogous. Oracle is a huge, complex piece of software. If you manage to figure out how to administer it properly, you may get both good performance and reliability out of it. MySQL has many limitations and clearly isn't suitable for all applications, but it does what it does pretty well.

I think the database situation is far from good, either in the commercial world or in the open source world. I suspect SQL itself is in part to blame for this state of affairs. Ultimately, relief will likely come from a paradigm shift, perhaps to more API-centric approaches (like MetaKit) or to persistent object stores.

Transactions are about making multiple updates atomic. A database can be perfectly reliable and robust without supporting that feature.

Transactions also don't come in a single flavor; they come in different levels with different guarantees.

And while "if the transaction didn't complete, immediately undo everything" is nice, it's often unnecessary. Many applications can be implemented by adding information into the database and activating it with an update to a single row in a single table (which is atomic in most databases even without transactions). The main area where that doesn't work is if you do global operations on tables, like taking a financial total; but that need doesn't usually arise in systems like ACS.
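A minimal sketch of that "insert, then activate with a single-row update" pattern, using an in-memory SQLite database to stand in for MySQL (table and column names are invented):

```python
# Stage new rows inactive, then flip a single flag row to activate them.
# A crash before the final UPDATE leaves the old data fully intact, which
# is why this pattern works even without transaction support.

import sqlite3

def publish_new_batch(db, rows):
    # Step 1: insert the new batch; readers still see the old one.
    for item, price in rows:
        db.execute(
            "INSERT INTO prices (item, price, batch) VALUES (?, ?, 'new')",
            (item, price),
        )
    # Step 2: one single-row UPDATE switches readers over. A single
    # statement is atomic in most databases, transactions or not.
    db.execute("UPDATE config SET active_batch = 'new' WHERE id = 1")

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE prices (item TEXT, price REAL, batch TEXT)")
db.execute("CREATE TABLE config (id INTEGER PRIMARY KEY, active_batch TEXT)")
db.execute("INSERT INTO config VALUES (1, 'old')")

publish_new_batch(db, [("widget", 9.99)])
active = db.execute("SELECT active_batch FROM config WHERE id = 1").fetchone()[0]
print(active)  # new
```

As the comment says, this breaks down once you need global consistency (like a financial total across tables), but for "add content, then make it visible" workloads it covers a lot of ground.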

The transactional problems most web based sites have to solve are much simpler than the transactional problems that an enterprise level financial database has to solve. OTOH, performance and cost matter a lot more for the web site.

As for the Western movie analogy, it's a good one. If you aren't quick enough against a good opponent, you're dead. It's better to take your chances and draw quickly even if you can't hit as accurately. The same is true for a lot of business decisions.

I would be more interested in Postgresql except there is this really irritating postbot that with deadly regularity pops up and says in a colorless monotone, "Use Postgresql. It tastes good. It is better for you."

What's curious is that the bot never seems to talk about its actual real-world projects using Postgresql. I would love to hear more about that, actually. It's only from comparing our experiences that we learn how to do our work better.

This sounds reasonable until you apply actual experience to it. Most databases in most circumstances are primarily read-oriented. If I were to put very broad numbers on it, I would say 80% read, 10% write and 10% processor-bound. There may be a few very active tables where writes dominate (for example, a cookies table for a web site or a state table in a logistics app), but this tends to be limited to one area of the database. In general, such active tables will always be in memory anyway (or should be to the extent possible; if you can't manage to add more memory, perhaps a redesign is in order, since the cost of going to disk, especially for frequent writes, is prohibitive compared to in-memory access).

Data integrity features such as SQL COMMIT/ROLLBACK on table writes may be indicated when the risk of data loss is high, but don't forget that you pay a performance and overhead price for having these features active. That is to say, data integrity features are desirable, but not at an indefinite cost.

It is important to remember that in databases, as in all programming, nothing functions in complete isolation. The presence or absence of a given feature does not trump every other consideration, and besides that, sound design generally is the most important, least understood factor within your control for making any database work properly.

I think the ArsDigita folks would be a little more convincing if they considered intermediate solutions instead of the extreme ends of the spectrum. I'd (personally) never trust MySQL with on-line transaction processing, or anything involving non-trivial amounts of money. On the other hand, I wouldn't seriously consider a massive system like Oracle unless I was dealing with vast numbers of transactions or large amounts of money. On the gripping hand, most of my DB experience has been with middle-of-the-road systems like Sybase and (ghod help me) MS SQL, which may not provide the full power and reliability of Oracle, but much more flexibility and stability than MySQL. It's a matter of fitting the needs of your project. Now, I am considering using MySQL for a personal project involving web page generation; I might consider Postgres after reading some of the remarks here.

Every time this discussion comes up (MySQL vs. Oracle, etc.), everybody focuses on the lack of certain features (transactions being the main one, others usually include replication, triggers, stored procedures, and journaling).

The soapbox I always climb on is that all of those features are still available, just not at the RDBMS level. Between 1985 and 1996 (roughly), I was writing client-server applications with tools not nearly as powerful as we have today -- to work with ISAM, dBase/Foxpro, and B-Trieve type tables. I've created modules that did database replication, transactions, and server-internal processes called by client apps (doing the same kinds of things that stored procedures and triggers do), and I would wager that all of my code which did those kinds of things looks a hell of a lot like the code which does those things inside Oracle, DB-2, Informix, etc.

In more simple terms, here's the progression inside the database:

A certain type of data request is made for which a trigger has been set

When the "trigger" is pulled, i.e., that type of data request is made, code within the database is launched

Code within the database performs some useful function, which may include actions which cause other triggers to be fired, etc.

Code within the database returns values as defined by the trigger/stored procedure.

Okay, (as an example), you want to do the same thing with a web server enabled with PHP and a MySQL back end? Here goes:

PHP code on the web server tests for a certain type of data request.

When that data request is made, the web server calls another PHP module which has been programmed to do the same thing the stored procedure did.

This new code performs some useful action, which may include calling other modules to do useful functions

These functions interact with the database, doing the same things their stored procedure counterparts would do, and return data/return codes to the calling program.
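The progression above can be sketched in a few lines, with Python standing in for PHP. The request types and handler names here are invented; the registered handlers play the role that stored procedures and triggers would play inside the database:

```python
# Application-level "trigger" dispatch: handlers registered per request
# type are invoked when that kind of data request arrives, just like a
# trigger firing inside the database.

triggers = {}

def on(request_type):
    """Decorator that registers a handler ('sets a trigger') for a request type."""
    def register(fn):
        triggers[request_type] = fn
        return fn
    return register

@on("insert_order")
def handle_insert_order(payload):
    # ...do the work a stored procedure would, possibly firing others...
    return {"status": "ok", "order_id": payload["id"]}

def dispatch(request_type, payload):
    """Pull the 'trigger' for this request type, if one has been set."""
    handler = triggers.get(request_type)
    if handler is None:
        return {"status": "error", "reason": "no trigger set"}
    return handler(payload)

print(dispatch("insert_order", {"id": 7}))
```

The trade-off is exactly the one named below: this code runs in the web tier, so only the modules that need "trigger" behavior pay for it.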

What are the differences?

First, PHP is obviously not as fast as the PL/SQL code inside an Oracle database, etc.

Second, I (the programmer) have to work a little bit harder to make things work efficiently, and

very importantly, I only incur the processing overhead for "stored procedures", in the specific application modules that need it -- not throughout the database.

Which is why MySQL is my database of choice for damn near every project I do, including some large, enterprise-size data sets. The key is knowing when the other features of Oracle, SQL Server, etc. are more critical to the client than outright blazing speed for the majority of the database work to be done.

Yes, there's a good reason for it -- virtually all IT managers are mindless sheep.

And many DBAs... I run a mission critical app in MySQL, and the rest of the Enterprise runs Oracle. Very heavily supported Oracle. Very big Enterprise.

Now, I'm no fool -- I know the limitations of MySQL, and work around them... I have apps that maintain a journal in extra cycles so I can run a full restore, etc... all at the app level.
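An app-level journal of the kind described might look roughly like this. This is a minimal sketch; in practice the journal would be an append-only file fsync'd per entry, and the dict stands in for the real database:

```python
# Maintain a write-ahead journal at the application level so a full
# restore can be replayed after a crash, as described above.

journal = []  # in practice: an append-only, fsync'd log file

def apply_write(db, key, value, log=True):
    """Record the write in the journal, then apply it to the database."""
    if log:
        journal.append((key, value))
    db[key] = value

def restore(empty_db):
    """Rebuild a fresh database by replaying the journal in order."""
    for key, value in journal:
        apply_write(empty_db, key, value, log=False)
    return empty_db

# Normal operation, then a simulated rebuild from scratch.
db = {}
apply_write(db, "balance:alice", 100)
apply_write(db, "balance:bob", 50)
recovered = restore({})
print(recovered == db)  # True
```

Journaling before applying (not after) is what makes the replay complete: a crash between the two steps loses at most a write that was never acknowledged.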

But the point is this: MySQL has not failed me in the 2+ years that I've used it, while the larger site/service has repeatedly gone down over and over again due to Oracle dying. The people who admin the Oracle DBs are very capable and well trained... but it's gone down nonetheless.

When ditchdiggers cut power, and the UPSes ran for two hours, and the generator failed to kick in, there were lots of dead Solaris boxes, the Oracle database required half a day to be rebuilt, and my three VA Linux boxes (and the others that don't directly relate to this project) all came up when power was turned back on. Apache+PHP loaded, cron jobs were run, machines synced, and MySQL ran without a hitch.

Oracle has gone down only a handful of times in two years. MySQL's downtime has been zero while at least one server has been running.

It seems like Dude's biggest gripe is that MySQL claims to be an RDBMS when it's not, but I've never found the documentation on the MySQL web site to be particularly misleading, and I've always thought they are quite open about the areas where MySQL is lacking.

I think other people have found the MySQL side of the story less than completely frank. A big reason for this is their infamous benchmark collection, [mysql.com] where they compare apples and oranges. (To drag this metaphor out a bit, they also seem to suggest that since their apples are sweeter than most oranges that it's okay that they are a poor source of vitamin C since most foods are a poor source of vitamin C...)

I have news for you: "performance" usually means "speed". As in "fast", which is usually measured by "benchmarks", which do fulfill all the requirements for a "test". One of these benchmarks, in which PostgreSQL loses pitifully to MySQL is here.

Glad you mentioned this stuff. There are many problems here.

On some of the comparisons listed, the benchmarked versions of PostgreSQL are very old, dating back to 6.3.

On the first comparison listed, the version of Postgresql is less old (but not the newest), while the version of MySQL is an alpha version.

Some of the tests where there is a big difference between the two products are testing precisely things where MySQL's lack of ACIDity would bite you hardest if anything went wrong (like deleting big or many entries, updating stuff...you get the picture.)

Some of the tests that show the smallest differences (or advantages for PostgreSQL) are the things that MySQL proponents claim are more important anyway: many kinds of selects. In particular, look at "select_join".

So, what do you really think these prove? Not much in my book. OK, here's one: if you take a database that doesn't do transactions and compare it with one that does, then certain transaction-related tests are going to favor the former. Wow. Like that's big news.

If your definition of "critical" doesn't require transactions and all the fancy stuff, but does require very good performance, then MySQL is great. If you do require full ACID, then you'll just have to pay the price of one of the bigger players.

Well, as the article referenced by this thread suggests, some of "the fancy stuff" missing in MySQL includes real triggers and things that would actually speed up many MySQL web apps.

More importantly, you don't have to buy Oracle to get ACID; just get PostgreSQL [postgresql.org] instead. Open-source, and with a pH value only a fraction of a unit away from pure ACIDity.

Moderators: this post is only redundant if you assume that people have read the article; that's all I'll say.

10th time in a row: MySQL has arbitrary string locks. If you want a lock on a record, then do an application-level lock on a string with a generated record name.

Bingo. In your own words, you are describing a record lock implemented at the application level. Meaning: you've got to write your own code if you want to hack around the lack of record-level locking.

Ah, I think I've had an insight.

It might be the case that some people don't appreciate the difference between declarative programming, where you just state the desired result, and procedural programming, where you have to do much more of the work yourself.

The whole relational model of databases, as described by people like Chris Date, is a nice fit for declarative programming. In reality, things are not so nice, and they involve some very ugly things like the dark corners of SQL. BUT you should never have to go in and try to figure out how to do stuff like row-locking.

This is not only because you might well screw it up, but because your problem involves coming up with an appropriate design for your database, not worrying about how little fiddly things actually get implemented. Yeah, that's an ideal, and it doesn't get reached very often, but you don't go out and design a system that's going to require a bunch of non-declarative code just to get things up to speed.

Go ahead and implement your own row-locks or transaction logs. Nobody is stopping you. But don't go claiming that your need to hack doesn't expose a weakness in the system.

True, but their attitude bothers me. They point out the shortcomings and basically suggest no one use it, when in fact MySQL is a great database for its intended usage. They can't seem to understand that MySQL fits a niche that no other database does, which is why the MySQL folks have made feature compromises for performance.

Do these people think they are really telling people something new? I'm really somewhat offended by the attitude they use, like MySQL is fooling people or lying about what their product does. It's meant to be fast and simple. Period. It's not meant to handle data integrity, transactions, and other advanced features. They seem to feel that this is a fault whereas most people will agree that this is a feature because when you don't need all that stuff and you really just want something fast to do queries (like a dynamic web page...), MySQL is the hands down winner.

I don't think it's true that MySQL's only advantage is price. The problems I've seen (at least with Oracle, DB/2, and Sybase) is that they have a lot of overhead. They consume a lot of memory for openers. If you really don't need things like transactions, something like MySQL is probably going to be faster.

In my opinion, MySQL has its place and Oracle has its place. It's simply a matter of using the right tool for the job.

I work for a relatively small insurance company that has done some work with mySQL. We have a web site from which our field agents run reports written in Perl. Typically the database will have around 150,000+ records.

It is fast. Very fast. mySQL does exactly what I need it to do, spit back data quickly for those agents on dialup internet connections.

In the 384 days (and counting) that we've had mySQL up and running, there has never been a corruption or crash. So yes, it is a stable and reliable database.

The important thing is that it works for us. If you are selling stuff online, maybe you should look at something else, say PostgreSQL.

I guess if you don't consider the GPL to be an open source license, then your statement makes sense. While TcX doesn't release the current-stable series under the GPL, they release older generations under the GPL. You can get MySQL 3.20.32a under the GPL right now. When 3.23.x becomes the official stable series, I'll bet you'll see 3.21.33 be GPLed.

TcX has been developing MySQL for years without "raising the price." Even if they did, so what? You still get the source, and they still publish older stable versions under the GPL.

As for PostgreSQL... Until it becomes *stable*, *fast*, and supports *outer joins* I find it an unworkable solution.

If we were to implement something that required more than a few updates per minute, we would look into PostgresSQL or Sybase but as it stands now MySQL is the hands down winner.

My company's benchmarks indicate that MySQL is faster for inserts than DB2 (widely regarded as being a bloody fast database) by a factor of 30x. Higher if you want to avoid table corruption in DB2. (we had to turn journaling off to get the best performance)

"This sounds like an application that could be better handled by LDAP instead of an SQL database."

Hogwash.

"I have a suspicion that the people who safely use MySQL would be better off using LDAP, because their applications are heavily oriented towards reads"

Actually, our application is heavily oriented towards *writes*, which is why we use MySQL. 30x faster for our situation than the best alternative. (DB2)

We did use LDAP at one point, but performance and scalability were non-existent. We had to scramble to move to MySQL when we started getting around 50,000 page views/day -- the LDAP system couldn't handle it. Now, with MySQL we handle 2,000,000 page views a day, with several products instead of the one product we had at the time. (web based e-mail client)

yeah, but how much does mySQL slow down if you want to avoid table corruption? oh wait, unlike DB2, you can't avoid it.

I've NEVER seen unrecoverable table corruption in MySQL. In the DB2 instance, a table would be completely obliterated if the script accessing the table crashed before it disconnected the database handle. Oops. No ability to recover AT ALL, thanks to the fact that turning the journal off is what caused the instability.

Fast and decently reliable is good enough for some tasks, but for almost all non-web, and a good bit of web applications, MySQL is a square peg in a round hole.

I agree that for many non-web tasks MySQL is not well suited. At least for so-called "enterprise level" stuff. MySQL can be used to fill smaller niches even in that arena though.

I've only ever been involved in one web application where MySQL was not well suited, and that application qualified as a traditional "data mining" type application.

MySQL's holes are certainly annoying. Think of what these three mean, for instance:

No subselects

No updates or deletes on a query with a join

No views

This means that if you want to delete/update in one table based on information in another, there is no option but to do it programmatically (rather than with just a fixed series of SQL statements). Another example of this problem is that MySQL doesn't let you insert into the same table you're selecting from, again requiring fiddly and time-consuming workarounds that just shouldn't be necessary.
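For illustration, the programmatic workaround looks like this: select the matching keys first, then delete them in a second statement. SQLite stands in for MySQL here, and the schema is invented:

```python
# Working around the lack of subselects / multi-table DELETE: fetch the
# ids one query would have produced, then delete them programmatically.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, banned INTEGER)")
db.execute("CREATE TABLE posts (user_id INTEGER, body TEXT)")
db.executemany("INSERT INTO users VALUES (?, ?)", [(1, 0), (2, 1)])
db.executemany("INSERT INTO posts VALUES (?, ?)", [(1, "hi"), (2, "spam")])

# Step 1: the query a subselect would have expressed.
banned_ids = [row[0] for row in
              db.execute("SELECT id FROM users WHERE banned = 1")]

# Step 2: delete in application code, one id at a time.
db.executemany("DELETE FROM posts WHERE user_id = ?",
               [(i,) for i in banned_ids])

remaining = db.execute("SELECT COUNT(*) FROM posts").fetchone()[0]
print(remaining)  # 1
```

With subselects this would be one fixed SQL statement; without them, the id list has to make a round trip through the application, which is exactly the "fiddly and time-consuming" part.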

Despite its many shortcomings, MySQL fills an important niche -- there are no other DB servers I know of that provide reasonable SQL capabilities while compromising on 'ACIDity'. Yet this compromise is actually ideal in many situations... Take Slashdot for instance, where the occasional lost comment is acceptable for an enormous speed increase. It does mean being more careful with your coding so you don't end up with orphaned records, but on the other hand the inefficiency of Oracle/SQL Server/etc. in having to check for transactions, triggers, referential integrity, etc. on every update is not always ideal.

I do find the Open Source/Free Software community's infatuation with MySQL most perplexing though (like the authors of this article). It's neither free speech nor free beer (unless you use it for no commercial purpose--even if you just consult you have to purchase it).

My dream database for non-critical systems (like running web discussion boards!) would take the same design premises as MySQL, but provide the same power and flexibility as MS Access and SQL Server. Yes, I said the M word! MS's databases provide almost no obstruction to using SQL any way you like... Updates and deletes can be done on pretty much any query where it makes logical sense (e.g. not on a group by!), and complex selects can be written with highly nested views and easy to write VBA user-defined functions. Furthermore, their linkage with very easy-to-use OLAP tools (OLAP Services and Excel PivotTables) is extremely useful when you want to get stuff out of your DB, rather than just put it in.

There is a database which is free, which has high performance, which is reliable, which runs very well on Linux, which is reasonably easy to secure, which supports proper transactions with proper commits and rollbacks, which has a genuinely open-source license.

It's called PostgreSQL [postgresql.org]. It outperforms MySQL on just about any test you care to name. Get it.

I'm sure there is going to be a lot of slamming of the "mySQL is quick and it rocks on site X" variety. But the points raised are valid. There are certain things that are required for an RDBMS to be worthy of the name. One such element is transactions: when maintaining critical data it is paramount to ensure that adverse changes can be removed when a change is found to be invalid. There are others, but in most of the systems I've worked on this has been the key requirement: to bunch up a load of calls which are interspersed with coding logic, and at the point where you reach a magic if statement that determines that something has changed and the request is now invalid, to remove all traces of that request.

"It isn't enough to be quick, you have to be accurate" as many a Western movie has pointed out. mySQL is a fine system for maintaining data that can be the right choices in certain instances. But don't go betting the company on it handling your business logic.

I've used MySQL in a "mission critical" position dealing with a database of over 20 million records. It also grows by about 20,000 records a day.

This particular system is still used by a large UK ISP to calculate on-line charges and consequently needs to be reliable. It is. It is also more accurate than a well known telco when it comes to calculating charges.

It stays up, doesn't get corrupted, run out of extents, or rely on a 90 UKP/hour support person to come and attend to it every month.

Every single point made in the article is questionable. If I could be arsed I would write a response... but nowadays I let people get on doing their own thing rather than try to persuade them otherwise. If the world wants to run Windows, MSSQL or whatever, let it. When it all comes tumbling down, I and all of the other Linux users in the world, still smirking over 'ILOVEYOU', will still be tapping away and feeling content.

In a nutshell, Mysql: Is really fast, Is very very stable, Has excellent repair tools should anything go wrong, Is totally reliable.

OK, it doesn't implement full SQL, but so what? When you write a system in perl/c/php you get around these minor problems and end up with queries that run about 100-200% faster than the oracle equivs. Subqueries? Pah!

I'm not against oracle/informix/sybase; just don't write off MySQL because it's free. It works, and in my experience is a lot less hassle than running oracle.

Everyone knows mysql isn't a real RDBMS and no one in his right mind would use it for a real web site. In fact, we don't really know if it would scale to a large web site capable of serving, say, 600,000 to 800,000 pages on a busy week day (Or less on weekends.) The only people we know who are actively using MySQL are little old ladies storing recipes.

Something that kills me about some of this software is that there wasn't enough research. The Linux kernel was HEAVILY criticized (the Tanenbaum thread) because it was obsolete... this also applies to mySQL, and both criticisms are correct!

But Linux and mySQL are fast and popular. Windows 9x is popular too, your point? NT's IIS scores well on benchmarks and is fast, your point?

Popularity and speed are no substitute for correctness. Linux went through several MAJOR revisions (until 2.2, I would argue) to compensate for a bad initial design.

No disrespect for Linus; it was an impressive one-man operation, and all the hobbyists did an impressive job, but there is no excuse for writing incorrect code.

We CONSTANTLY criticize Microsoft for intentionally doing things in a dumb way. Why do we do the same?

Operating system theory had solved many of the problems plaguing Linux, and solved them 10+ years before Linux began. All the "modern" work on the microkernel, etc., and we still don't have it? Even NT has a microkernel architecture, and they were able to support 4 platforms with minimal effort; they dropped to one for marketing, not technical, reasons.

Linux supports many platforms, but at what cost? Each port seems to be such a big deal that we make a big deal out of it. If Linux had moved to a microkernel, it would be on more platforms than FreeBSD, but I digress.

In my architecture class (forget product development, an undergraduate class) we discussed atomic operations and how to provide for them. This was a very difficult thing and took a lot of time to work out. In fact, real solutions weren't really available until the late 70s.

HOWEVER, there is no excuse for pushing a database in the late 90s/early 00s without atomic operations.

Speed is NOT important. Processing power is SO cheap that saving cycles is irrelevant, especially for a constant difference. There is no order-of-growth problem; it might take 2-3 times as long, and that's irrelevant -- just buy more horsepower.

Benchmarks that show less than an order of magnitude difference are irrelevant. If you need 3 times the power, get a 4 processor system. Do NOT sacrifice correctness to save money.

If you are doing REAL database work you need the atomic transactions and transaction rollbacks.

If I am running a bank, I need to make certain that the money is taken out of account 1 and placed in account 2 at the same time. It is not okay to remove it from 1 and, if it crashes, oh well, it disappears.
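The bank example is exactly what transactions are for: both updates must land together or not at all. A minimal sketch using SQLite (which does support transactions) to illustrate, with invented account data:

```python
# Both UPDATEs commit together or the whole transfer rolls back; a crash
# between them can never make the money vanish.

import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode so we can issue
# explicit BEGIN/COMMIT/ROLLBACK ourselves.
db = sqlite3.connect(":memory:", isolation_level=None)
db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
db.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 0)])

def transfer(db, src, dst, amount):
    try:
        db.execute("BEGIN")
        db.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                   (amount, src))
        # Without a transaction, a crash right here loses the money.
        db.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                   (amount, dst))
        db.execute("COMMIT")
    except Exception:
        db.execute("ROLLBACK")
        raise

transfer(db, 1, 2, 40)
balances = [row[0] for row in
            db.execute("SELECT balance FROM accounts ORDER BY id")]
print(balances)  # [60, 40]
```

Emulating this in application code on a non-transactional store means writing the journaling and recovery logic yourself, which is the whole argument of the thread.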

In Slashdot's case, maybe it's okay, who cares if a post or two are lost. But if you are running a real site dealing with e-commerce, etc., you need to worry about it. Perhaps for discussion forums this doesn't matter, but for anything more serious, it does.

I find it interesting that there are so many obvious mistakes made in free software. The GNU stuff is solid, but that is written by academics who want correctness before anything else. RMS didn't care if Emacs was 10 times as big and 1/4 the speed, as long as it worked all the time. MySQL takes the Microsoft approach: "yeah, it isn't correct, but few people will see the problems, so they don't matter."

It is not relevant whether Linux-based systems are more reliable than Windows 98. EVERYTHING is more reliable than Windows 98. (but Windows 98 is a market leader, waah, waah, waah) So what? Windows 98 is used for its ease of use and available applications.

For Linux to compete on the desktop, it needs to beat them there.

For Linux to win on the server (not Web servers, REAL servers) market, it needs to be better than the competition.

Being better than NT/IIS will let Linux/Apache fight MS in the low end, nothing more. However, running a Linux system is as complicated as running a Sun system. So why use Linux instead of Sun? If your only reason is cost or availability of source code, then you've undercut the "Cathedral and the Bazaar" argument.

If the reason is that it is free (as RMS argues), then you've made a wonderful personal decision, but STOP TRYING to argue that businesses should make the same one; they don't care about freedom, they exist to make profits.

I think that the slashdot community (although I hope it is limited to here and not the rest of the OSS/FS community) needs a REAL wakeup call.

Instead of comparing yourself to mediocre systems, start doing things right. Instead of suggesting that you are good enough (the way MS does), aim to be correct. Instead of championing the uneducated hacker mentality, do a LITTLE research. You don't need a Ph.D., but can't you read 5-10 articles on system design by academics before you design your system? Invest 10-50 hours in research and do things RIGHT, instead of wrong.

If you persist in doing things wrong, then keep comparing yourself to Microsoft. Microsoft and some of the OSS stuff follow the same mantra: get it done, mostly working, and who cares if it is correct.

This MAY produce large gains in the marketplace, but it guarantees that we will join Microsoft in holding back the computer industry.

The article in question is quite bad. It appears as if it is trying to make a point just by throwing about the term "atomicity" and quoting someone else. The implications made by the author are that MySQL is likely to lose data or be inconsistent. That is completely false.

Using MySQL is like using any other database engine in AUTO COMMIT mode. For heavy read requirements or simplistic data models, the fact that you cannot package multiple database accesses in a single transaction is irrelevant--you simply do not need to.
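A sketch of what that auto-commit style looks like, again using Python's sqlite3 (the table and column names are invented for illustration). With autocommit enabled, each statement is its own committed transaction, which is perfectly safe for the single-statement updates a read-heavy site mostly issues:

```python
import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode: every statement
# commits immediately, like a MySQL-style (non-transactional) workload.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE hits (page TEXT PRIMARY KEY, count INTEGER)")
conn.execute("INSERT INTO hits VALUES ('index', 0)")

# A single-statement update is atomic on its own; no explicit COMMIT needed.
conn.execute("UPDATE hits SET count = count + 1 WHERE page = 'index'")
print(conn.execute("SELECT count FROM hits WHERE page = 'index'").fetchone()[0])
```

What you give up in this mode is the ability to group several statements into one all-or-nothing unit; for a hit counter or a mostly-read catalog, that's no loss at all.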

The one assertion that is true is that MySQL is not an enterprise database engine. It neither claims to be one nor strives to be one. But that hardly renders it, as he calls it, a file system with a SQL front end.

If you're doing transactions that matter, which includes anything involving money, you need a database with atomic transactions that can maintain database correctness over system crashes.

If you're not doing transactions at all, which is common, some dumb database like Jet [microsoft.com] is probably sufficient.

MySQL is somewhere in the middle. That is the problem. It looks superficially like a transaction-oriented database, but isn't. The open source community needs a real transaction-oriented database to play in the mission-critical server world, and MySQL doesn't seem to be there yet.