Drizzle

I was swamped with registrations for the online contributor tutorial for Drizzle, and so I’ve bumped up my account to a DimDim Pro account. This means two things:

I can take >20 registrations

I can record the session

So, Diego, rest assured, the session will be recorded (hopefully with no glitches). I’m going to call DimDim to see if I can do a practice recording beforehand to verify Linux64 is a platform they support for recording (if not, I’ll go to my neighbour’s Windows computer to record).

Again, if you’re interested in the webinar, please do register using the widget below:

So, last year, Drizzle participated in the Google Summer of Code under the MySQL project organization. We had four excellent student submissions and myself, Monty Taylor, Eric Day and Stewart Smith all mentored students for the summer. It was my second year mentoring, and I really enjoyed it, so I was looking forward to this year’s summer of code.

I have been absolutely floored by the flood of potential students who have shown up on the mailing list and the #drizzle IRC channel. I have been even more impressed with those students’ ambition, sense of community, and willingness to ask questions and help other students as they show up. A couple of students have even gotten code into the source trees before submitting their official applications to GSoC. See, I told you they were ambitious!

This year, Drizzle has a listing of 16 potential projects for students to work on. The projects are for students interested in developing in C++, Python, or Perl.

If you are interested in participating, please do check out Drizzle! For those new to Launchpad, Bazaar, and C++ development with Drizzle, feel free to check out these blog articles which cover those topics:

Today I pushed up the initial patch which adds XA support to Drizzle’s transaction log. So, to give myself a bit of a rest from coding, I’m going to blog a bit about the transaction log and show off some of its features.

WARNING: Please keep in mind that the transaction log module in Drizzle is under heavy development and should not be used in production environments. That said, I’d love to get as much feedback as possible on it, and if you feel like throwing some heavy data at it, that would be awesome.

What is the Transaction Log?

Simply put, the transaction log is a record of every modification to the state of the server’s data. It is similar to MySQL’s binlog, with some substantial differences:

The transaction log is a plugin[1]. It lives entirely outside of the Drizzle kernel. The advantage of this is that development of the transaction log does not need to be linked with development in the kernel and versioning of the transaction log can happen independently of the kernel.

Currently, there is only a single log file. MySQL’s binlog can be split into multiple files. This may or may not change in the future.

Drizzle’s transaction log is indexed. Among other things, this means that you can query the transaction log directly from within a Drizzle client via DATA_DICTIONARY views. I will demonstrate this feature below.

It is important to also point out that Drizzle’s transaction log is not required for Drizzle replication. This probably sounds very weird to folks who are accustomed to MySQL replication, which depends on the MySQL binlog. In Drizzle, the replication API is different. Although the transaction log can be used in Drizzle’s replication system, it’s not required. I’ll write more on this in later blog posts which demonstrate how the replication system is not dependent on the transaction log, but in this article I just want to highlight the transaction log module.

How Do I Enable the Transaction Log?

First things first, let’s see how we can enable the Transaction Log. If you’ve built Drizzle from source or have installed Drizzle locally, you will be familiar with the process of starting up a Drizzle server. To review, here is how you do so:

cd $basedir
./drizzled [options] &

Where $basedir is the directory in which you built or installed Drizzle. For the [options], typically you will need at the very least a --datadir=$DATADIR and a --mysql-protocol-port=$PORT value. For an explanation of the --mysql-protocol-port option, see Eric Day’s recent article.

To demonstrate, I’ve built a Drizzle server in a local directory of mine, and I’ll use the /tests/var/ directory as my $datadir:

Now let’s start up the server, this time passing the --transaction-log-enable and the --default-replicator-enable options. The --default-replicator-enable option is needed when the transaction log is not in XA mode (more on that later):
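Putting the pieces together, the full startup command looks something like the following sketch. The port number and data directory values here are examples only; substitute your own:

```shell
cd $basedir
# Enable the transaction log; --default-replicator-enable is needed
# when the transaction log is not in XA mode.
./drizzled --datadir=$PWD/tests/var \
           --mysql-protocol-port=9306 \
           --transaction-log-enable \
           --default-replicator-enable &
```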

Let’s see what each of the views tells us about what is in the transaction log. Remember, we’ve executed a CREATE SCHEMA, a CREATE TABLE, and a single INSERT. Here is what the TRANSACTION_LOG view shows:

The column names should be self-explanatory. The FILE_LENGTH shows the size in bytes of the log (which matches the output we had from our ls -lha above). The INDEX_SIZE_IN_BYTES is the total amount of memory allocated for the transaction log index.

The TRANSACTION_LOG_ENTRIES view isn’t that interesting at first glance:

You might be tempted to ask what the heck the purpose of the TRANSACTION_LOG_ENTRIES view is. It is a bit of a bridge table that allows one to see the type of entry at each offset. Currently, there are only two types of entries in the transaction log: a TRANSACTION entry — basically a serialized Google Protobuffer message — and a BLOB entry, which is for storage of large blob data.

The TRANSACTION_LOG_TRANSACTIONS view shows all the transaction log entries which are of type TRANSACTION:

As you can see, there is some basic information about each transaction entry in the log, including the offset in the transaction log, the start and end timestamp of the transaction, its transaction identifier, the number of statements involved in the transaction, and an optional checksum for the message (more on checksums below).

Viewing the Transaction Content

While the above view output may be nice, what we’d really like to be able to do is see precisely what changes a Transaction effected. To see this, we can use the PRINT_TRANSACTION_MESSAGE(log_file, offset) UDF. Below, I’ve added two more rows to the lebowski.characters table within an explicit transaction. I then query the DATA_DICTIONARY views using the PRINT_TRANSACTION_MESSAGE() function to show the changes logged to the transaction log:
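The query looks roughly like this. It is a sketch: the log file name and the ENTRY_OFFSET column name are assumptions on my part, so check the DATA_DICTIONARY views in your build for the exact names:

```sql
-- Print the decoded Transaction message for the most recent entry.
SELECT PRINT_TRANSACTION_MESSAGE('transaction.log', ENTRY_OFFSET)
  FROM DATA_DICTIONARY.TRANSACTION_LOG_TRANSACTIONS
 ORDER BY ENTRY_OFFSET DESC
 LIMIT 1;
```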

You may notice that NUM_STATEMENTS is equal to 1 even though there were 2 INSERT statements issued. This is because the kernel packs both INSERTs into a single message::Statement::InsertData message for more efficient storage. If there had been an INSERT and an UPDATE, NUM_STATEMENTS would be 2.

Enable Automatic Checksumming

One final feature I’ll highlight in this blog post is an option to automatically store a checksum of each transaction message when writing entries to the transaction log. To enable this feature, simply use the --transaction-log-enable-checksum command line option. You can view the checksums of entries in the TRANSACTION_LOG_TRANSACTIONS view, as demonstrated below:

DDL is not Statement-based Replication

As a final note, I’d like to point out that even DDL in Drizzle is replicated as row-based transaction messages, and not as raw SQL statements like in MySQL. You can see, for instance, the message::Statement::CreateTableStatement inside the transaction message which contains all the metadata about the table you just created.

I tried commenting on Maureen’s article on their website, but the login system is apparently borked, at least for registered users who use OpenID; it still wants them to have a separate user ID and login. Note to sys-con.com: OpenID is designed so that users don’t have to remember yet another login for your website.

Setting aside my limited patience for content-sparse websites that simply provide an online haven for dozens of Flash advertisements per web page, the article had some serious problems, not the least of which was using large chunks of my Happiness is a Warm Cloud article without citation. Very professional.

OK, to start with, let’s take this quote from the article:

Drizzle runs the risk of not being as stable as MySQL, because the Drizzle team is taking things out and putting other stuff in. Of course it may be successful in trying to create a product that’s more stable than MySQL. But creating a stable DBMS engine is something that has always taken years and years.

This is just about the most naïve explanation for whether a product will or will not be stable that I’ve ever read. If Maureen had bothered to email or call any one of the core Drizzle developers, they’d have been happy to tell her what is and is not stable about Drizzle, and why. Drizzle has not changed the underlying storage engines, so the InnoDB storage engine in Drizzle is the same plugin as available in MySQL (version 1.0.6).

The pieces of MySQL which were removed from Drizzle happen to be the parts of MySQL which have had the most stability issues — namely the additional features added to MySQL 5.0: stored procedures, views, triggers, stored functions, the INFORMATION_SCHEMA implementation, and server-side cursors and prepared statements. In addition to these removed features of MySQL, Drizzle also has no built-in Query Cache, does not support anything other than UTF-8 character sets, and has removed the MySQL replication system and binary logging — moving a rewrite of these pieces out into the plugin ecosystem.

The pieces that were added to Drizzle have mostly been done by adding plugins that provide functionality. Maureen, the reason this was done was precisely to allow for greater stability of the kernel by segregating new features and functionality into the plugin ecosystem, where they can be properly versioned and quarantined, therefore increasing kernel stability. It’s pretty much the biggest principle of Drizzle’s design…

The core developers of Drizzle (and much of the Drizzle community) would also have been happy to tell Maureen how the Drizzle team defines “stability”: when the community says Drizzle is stable — simple as that.

OK, so the next thing I took objection to is the following line:

Half of Rackspace’s customers are on MySQL so there’ll be some donkey-style nosing to get them to migrate.

I think my Rackspace colleagues might have quite a bit to say about the above. I haven’t seen any Rackers talking about mass migration from MySQL to Drizzle. As far as I have seen, the plan is to provide Drizzle as an additional service to Rackspace customers.

Rackspace evidently wants its new boys, who were not the core pillars of the MySQL engineering team, to hitch MySQL, er, Drizzle to Cassandra

MySQL != Drizzle. Implying that the two are equal does a disservice to both, as they have very different target markets and developer audiences.

The smart money is betting that even if a good number of high-volume web sites go down this route, an even higher number such as Facebook and Google will continue with relational databases, primarily MySQL.

Again, probably best to do your homework on this one, too. Facebook runs an amalgamation of a custom MySQL version and storage engines, distributed key-value stores, and Memcached servers. I would think that Facebook moving to Drizzle would be one tough migration. Thousands (tens of thousands?) of MySQL servers all running custom software and integrated into their caching layers is a huge barrier to entry, and not one I would expect a large site like Facebook to casually undertake. But, the same could be said about a move to SQL Server or Oracle, for that matter, and has little to do with Drizzle.

OK, so the next quote got me really fired up because it demonstrates a complete lack of understanding (maybe not Maureen’s, but the unnamed source it’s from at least):

Somebody – sorry we forget who exactly – claimed that as GPL 2 code Drizzle “severely limits revenue opportunities. For Rackspace, the opportunity to have some key Drizzle developers on its payrolls basically comes down to a promotional benefit, trying to position Rackspace as particularly Drizzle-savvy in the eyes of the community and currying favor for its seemingly generous contributions. What’s unclear is whether they may develop some Drizzle-related functionality that they will then not release as open source and just rent out to Rackspace hosting customers…that would be a way for them to differentiate themselves from competitors and GPLv2 would in principle allow this.”

A few points to make about the above quote.

First, name your source. I find it difficult to believe that the most-read technology writer would not write down a source. Is it the same person you deliberately left out of a quote from my Happiness article? (why did you do that, btw?).

Second, the MySQL server source code is licensed under the GPL 2, and so is Drizzle’s kernel, because it is a derivative work of the MySQL server.

Let me be clear: Developers who contribute code to Drizzle do so under the GPLv2 if that contribution is in the Drizzle kernel. If the code contribution is a plugin, the contributor is free to pick whatever license they choose.

Third, licensing has little if anything to do with revenue. The license is beside the point. There are two things which dictate a company’s ability to derive revenue from software:

Copyright ownership

Principles of the Company

Neither Drizzle, Rackspace, nor any company a Drizzle contributor works for owns the copyright on the MySQL source code, from which Drizzle’s kernel is derived. Oracle does. Therefore, companies do not have any right to re-sell Drizzle (under any license) without explicit permission from Oracle. Period. It has nothing to do with the GPLv2.

That said, contributors do have the right to make money on plugins built for the Drizzle server, and Rackspace, while not having expressed any interest to yours truly in doing so, has the right, like any other Drizzle contributor, to make money on plugins its contributors create for Drizzle.

It is my understanding (after actually having talked to Rackspace managers and decision makers) that Rackspace is not interested in getting into the business of selling commercial Drizzle plugins. Their core direction is to create value for their customers, and I fail to see how getting into the commercial software sales business meets that goal.

Next time, please feel free to contact me or any other Drizzle contributor to get the low-down on Drizzle-related stuff. We’ll be nice. I promise.

Over the past six weeks or so, I have been working on cleaning up the pluggable storage engine API in Drizzle. I’d like to describe some of this work and talk a bit about the next steps I’m taking in the coming months as we roll towards implementing Log Shipping in Drizzle.

First, how did it come about that I started working on the storage engine API?

From Commands to Transactions

Well, it really goes back to my work on Drizzle’s replication system. I had implemented a simple, fast, and extensible log which stored records of the data changes made to a server. Originally, the log was called the Command Log, because the Google Protobuffer messages it contained were called message::Commands. The API for implementing replication plugins was very simple and within a month or so of debuting the API, quite a few replication plugins had been built, including one replicating to Memcached, a prototype one replicating to Gearman, and a filtering replicator plugin.

In addition, Marcus Eriksson had created the RabbitReplication project which could replicate from Drizzle to other data stores, including Cassandra and Project Voldemort. However, Marcus did not actually implement any C/C++ plugins using the Drizzle replication API. Instead, RabbitReplication simply read the new Command Log, which due to it simply being a file full of Google Protobuffer messages, was quick and easy to read into memory using a variety of different programming languages. RabbitReplication is written in Java, and it was great to see other programming languages be able to read Drizzle’s replication log so easily. Marcus later coded up a C++ TransactionApplier plugin which replaces the Drizzle replication log and instead replicates the GPB messages directly to RabbitMQ.

And there, you’ll note that one of the plugins involved in Drizzle’s replication system is called TransactionApplier. It used to be called CommandApplier. That was because the GPB Command messages were individual row change events, for the most part. However, I made a series of changes to the replication API, and now the GPB messages sent through the APIs are of class message::Transaction. message::Transaction objects contain a transaction context, with information about the transaction’s start and end time and its transaction identifier, along with a series of message::Statement objects, each representing a part of the data changes that the SQL transaction made.

Thus, the Command Log turned into the Transaction Log, and everywhere the term Command had been used, it was replaced with the terms Transaction and Statement (depending on whether you are talking about the entire Transaction or a piece of it). Log entries were now written to the Transaction Log at COMMIT and were not written if no COMMIT occurred[1].

After finishing this work to make the transaction log write Transaction messages at commit time, I was keen to begin coding up the publisher and subscriber plugins which represent a node in the replication environment. However, Brian had asked me to delay working on other replication features and ensure that the replication API could support fully distributed transactions via the X/Open XA distributed transaction protocol. XA support had been removed from Drizzle when the MySQL binlog and original replication system was ripped out and needed some TLC. Fair enough, I said. So, off I went to work on XA.

If Only It Were Simple…

As anyone who has worked on the MySQL source code or developed storage engines for MySQL knows, working with the MySQL pluggable storage engine API is sometimes not the easiest or most straightforward thing. I think the biggest problem with the MySQL storage engine API is that, due to understandable historical reasons, it’s an API that was designed with the MyISAM and HEAP storage engines in mind. Much of the transactional pieces of the API seem to be a bolted-on afterthought and can be very confusing to work with.

As an example, Paul McCullagh, developer of the transactional storage engine PBXT, recently emailed the mysql internals mailing list asking how the storage engine could tell when a SQL statement started and ended. You would think that such seemingly basic functionality would have a simple answer. You’d be wrong. Monty Widenius answered like this:

Why not simply have a counter in your transaction object for how start_stmt – reset(); When this is 0 then you know stmnt ended.

In Maria we count number of calls to external_lock() and when the sum goes to 0 we know the transaction has ended.

MySQL never kept a count of which handlers are used by a transaction, only which tables.

So the original logic was that external_lock(lock/unlock) is called for each usage of the table, which is normally more than enough information for a handler to know when a statement starts/ends.

The one case this didn’t work was in the case someone does lock tables as then external_lock is not called per statement. It was to satisfy this case that we added a call to start_stmt() for each table.

It’s of course possible to change things so that start_stmt() / end_stmt() would be called once per used handler, but this would be yet another overhead for the upper level to do which the current handlers that tracks call to external_lock() doesn’t need.

Well, in Drizzle-land, we aren’t beholden to “historic reasons”. So, after looking through the in-need-of-attention transaction processing code in the kernel, I decided that I would clean up the API so that storage engines did not have to jump through hoops to notify the kernel that they participate in a transaction, or just to figure out when a statement and a transaction started and ended.

The resulting changes to the API are quite dramatic I think, but I’ll leave it to the storage engine developers to tell me if the changes are good or not. The following is a summary of the changes to the storage engine API that I committed in the last few weeks.

plugin::StorageEngine Split Into Subclasses

The very first thing I did was to split the enormous base plugin class for a storage engine, plugin::StorageEngine, into two other subclasses containing transactional elements. plugin::TransactionalStorageEngine is now the base class for all storage engines which implement SQL transactions:

/**
 * A type of storage engine which supports SQL transactions.
 *
 * This class adds the SQL transactional API to the regular
 * storage engine. In other words, it adds support for the
 * following SQL statements:
 *
 * START TRANSACTION;
 * COMMIT;
 * ROLLBACK;
 * ROLLBACK TO SAVEPOINT;
 * SET SAVEPOINT;
 * RELEASE SAVEPOINT;
 */
class TransactionalStorageEngine : public StorageEngine
{
public:
  TransactionalStorageEngine(const std::string name_arg,
                             const std::bitset<HTON_BIT_SIZE> &flags_arg= HTON_NO_FLAGS);
  virtual ~TransactionalStorageEngine();
  ...
private:
  void setTransactionReadWrite(Session &session);

  /*
   * Indicates to a storage engine the start of a
   * new SQL transaction. This is called ONLY in the following
   * scenarios:
   *
   * 1) An explicit BEGIN WORK/START TRANSACTION is called
   * 2) After an explicit COMMIT AND CHAIN is called
   * 3) After an explicit ROLLBACK AND RELEASE is called
   * 4) When in AUTOCOMMIT mode and directly before a new
   *    SQL statement is started.
   */
  virtual int doStartTransaction(Session *session, start_transaction_option_t options)
  {
    (void) session;
    (void) options;
    return 0;
  }

  /**
   * Implementing classes should override these to provide savepoint
   * functionality.
   */
  virtual int doSetSavepoint(Session *session, NamedSavepoint &savepoint)= 0;
  virtual int doRollbackToSavepoint(Session *session, NamedSavepoint &savepoint)= 0;
  virtual int doReleaseSavepoint(Session *session, NamedSavepoint &savepoint)= 0;

  /**
   * Commits either the "statement transaction" or the "normal transaction".
   *
   * @param[in] The Session
   * @param[in] true if it's a real commit, that makes persistent changes
   *            false if it's not in fact a commit but an end of the
   *            statement that is part of the transaction.
   * @note
   *
   * 'normal_transaction' is also false in auto-commit mode where 'end of statement'
   * and 'real commit' mean the same event.
   */
  virtual int doCommit(Session *session, bool normal_transaction)= 0;

  /**
   * Rolls back either the "statement transaction" or the "normal transaction".
   *
   * @param[in] The Session
   * @param[in] true if it's a real commit, that makes persistent changes
   *            false if it's not in fact a commit but an end of the
   *            statement that is part of the transaction.
   * @note
   *
   * 'normal_transaction' is also false in auto-commit mode where 'end of statement'
   * and 'real commit' mean the same event.
   */
  virtual int doRollback(Session *session, bool normal_transaction)= 0;

  virtual int doReleaseTemporaryLatches(Session *session)
  {
    (void) session;
    return 0;
  }

  virtual int doStartConsistentSnapshot(Session *session)
  {
    (void) session;
    return 0;
  }
};

As you can see, plugin::TransactionalStorageEngine inherits from plugin::StorageEngine and extends it with a series of private pure virtual methods that implement the SQL transaction parts of a query — doCommit(), doRollback(), etc. Implementing classes simply inherit from plugin::TransactionalStorageEngine and implement their internal transaction processing in these private methods.

In addition to the SQL transaction, however, there is the concept of an XA transaction, used for distributed transaction coordination. The XA protocol is a two-phase commit protocol because it implements a PREPARE step before a COMMIT occurs. This XA API is exposed via two other classes, plugin::XaResourceManager and plugin::XaStorageEngine. plugin::XaResourceManager derived classes implement the resource manager API of the XA protocol. plugin::XaStorageEngine is a storage engine subclass which implements XA transactions in addition to SQL transactions.

Here is the plugin::XaResourceManager class:

/**
 * An abstract interface class which exposes the participation
 * of implementing classes in distributed transactions in the XA protocol.
 */
class XaResourceManager
{
public:
  XaResourceManager() {}
  virtual ~XaResourceManager() {}
  ...
private:
  /**
   * Does the COMMIT stage of the two-phase commit.
   */
  virtual int doXaCommit(Session *session, bool normal_transaction)= 0;

  /**
   * Does the ROLLBACK stage of the two-phase commit.
   */
  virtual int doXaRollback(Session *session, bool normal_transaction)= 0;

  /**
   * Does the PREPARE stage of the two-phase commit.
   */
  virtual int doXaPrepare(Session *session, bool normal_transaction)= 0;

  /**
   * Rolls back a transaction identified by a XID.
   */
  virtual int doXaRollbackXid(XID *xid)= 0;

  /**
   * Commits a transaction identified by a XID.
   */
  virtual int doXaCommitXid(XID *xid)= 0;

  /**
   * Notifies the transaction manager of any transactions
   * which had been marked prepared but not committed at
   * crash time or that have been heuristically completed
   * by the storage engine.
   *
   * @param[out] Reference to a vector of XIDs to add to
   *
   * @retval
   * Returns the number of transactions left to recover
   * for this engine.
   */
  virtual int doXaRecover(XID *append_to, size_t len)= 0;
};

Pretty clear. A plugin::XaStorageEngine inherits from both plugin::TransactionalStorageEngine and plugin::XaResourceManager because it implements both SQL transactions and XA transactions. The InnobaseEngine is a plugin which inherits from plugin::XaStorageEngine because InnoDB supports SQL transactions as well as XA.

Explicit Statement and Transaction Boundaries

The second major change addressed the problem Paul McCullagh noted in asking why finding out when a statement starts and ends is so obscure. I added two new methods to plugin::StorageEngine called doStartStatement() and doEndStatement(). The kernel now explicitly tells storage engines when a SQL statement starts and ends. This happens before any calls to Cursor::external_lock() happen, and there are no exception cases. In addition, the kernel now always tells transactional storage engines when a new SQL transaction is starting. It does this via an explicit call to plugin::TransactionalStorageEngine::doStartTransaction(). No exceptions, and yes, even for DDL operations.

What this means is that for a transactional storage engine, it no longer needs to “count the calls to Cursor::external_lock()” in order to know when a statement or transaction starts and ends. For a SQL transaction, this means that there is a clear code call path and there is no need for the storage engine to track whether the session is in AUTOCOMMIT mode or not. The kernel does all that work for the storage engine. Imagine a Session executes a single INSERT statement against an InnoDB table while in AUTOCOMMIT mode. This is what the call path looks like:
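Sketched from the method names described above, the sequence is roughly the following (the real kernel call path includes more internal steps than shown here):

```
-- Session executes: INSERT INTO t1 VALUES (...);  AUTOCOMMIT is ON --
1. plugin::TransactionalStorageEngine::doStartTransaction()  -- implicit transaction start
2. plugin::StorageEngine::doStartStatement()                 -- before any Cursor::external_lock()
3. Cursor::external_lock() and the row write for the INSERT
4. plugin::StorageEngine::doEndStatement()
5. plugin::TransactionalStorageEngine::doCommit()            -- AUTOCOMMIT commits at statement end
```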

No More Need for Engine to Call trans_register_ha()

The server has no way to know that an engine participates in
the statement and a transaction has been started
in it unless the engine says so. Thus, in order to be
a part of a transaction, the engine must “register” itself.
This is done by invoking trans_register_ha() server call.
Normally the engine registers itself whenever handler::external_lock()
is called. trans_register_ha() can be invoked many times: if
an engine is already registered, the call does nothing.
In case autocommit is not set, the engine must register itself
twice — both in the statement list and in the normal transaction
list.

That comment, and I’ve read it dozens of times, always seemed strange to me. I mean, does the server really not know that an engine participates in a statement or transaction unless the engine tells it? Of course not.

So, I removed the need for a storage engine to “register itself” with the kernel. Now, the transaction manager inside the Drizzle kernel (implemented in the TransactionServices component) automatically monitors which engines are participating in an SQL transaction and the engine doesn’t need to do anything to register itself.

In addition, due to the break-up of the plugin::StorageEngine class and the XA API into plugin::XaResourceManager, Drizzle’s transaction manager can now coordinate XA transactions from plugins other than storage engines. Yep, that’s right. Any plugin which implements plugin::XaResourceManager can participate in an XA transaction and Drizzle will act as the transaction manager. What’s the first plugin that will do this? Drizzle’s transaction log. The transaction log isn’t a storage engine, but it is able to participate in an XA transaction, so it will implement plugin::XaResourceManager but not plugin::StorageEngine.

Performance Impact of Code Changes

So, that “yet another overhead” Monty talked about in the quote above? There wasn’t any noticeable impact in performance or scalability at all. So much for optimize-first coding.

What’s Next?

The next thing I’m working on is removing the notion of the “statement transaction”, which is also a historical by-product, this time because of BerkeleyDB. Gee, I’ve got a lot of work ahead of me…

[1] Actually, there is a way that a transaction that was rolled back can get written to the transaction log. For bulk operations, the server can cut a Transaction message into multiple segments, and if the SQL transaction is rolled back, a special RollbackStatement message is written to the transaction log.

A number of readers responded, and, to be fair, most everyone was “correct” in their own way. Why? Well, because the way that MySQL deals with calls to CREATE TABLE ... SELECT, CREATE TABLE IF NOT EXISTS ... SELECT and their temporary-table counterparts is completely stupid, as I learned this week. Rob Wultsch essentially sums up my feelings about the behaviour of DDL statements in regards to transactions in a session:

Implicit commit is evil and stupid. Ideally we the server should error and roll back, imho.

The Officially Correct Answer (at least in MySQL)

OK, so here’s the “official” correct answer:

CREATE TABLE IF NOT EXISTS ... SELECT does not first check for the existence of the table in question. Instead, if the table in question does exist, CREATE TABLE IF NOT EXISTS ... SELECT behaves like an INSERT INTO ... SELECT statement. Yep, you heard right. So, instead of throwing a warning when it notices that the table exists, MySQL instead attempts to insert rows from the SELECT query into the existing table.

Here is the official MySQL explanation:

For CREATE TABLE … SELECT, if IF NOT EXISTS is given and the table already exists, MySQL handles the statement as follows:
* The table definition given in the CREATE TABLE part is ignored. No error occurs, even if the definition does not match that of the existing table.
* If there is a mismatch between the number of columns in the table and the number of columns produced by the SELECT part, the selected values are assigned to the rightmost columns. For example, if the table contains n columns and the SELECT produces m columns, where m < n, the selected values are assigned to the m rightmost columns in the table. Each of the initial n – m columns is assigned its default value, either that specified explicitly in the column definition or the implicit column data type default if the definition contains no default. If the SELECT part produces too many columns (m > n), an error occurs.
* If strict SQL mode is enabled and any of these initial columns do not have an explicit default value, the statement fails with an error.
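To make the column-mapping rule concrete, here is a hypothetical example (table and values invented for illustration):

```sql
-- Hypothetical table illustrating the m < n case:
CREATE TABLE t (a INT DEFAULT 7, b INT DEFAULT 8, c INT);

-- t already exists, so this behaves like INSERT INTO t ... SELECT.
-- The SELECT produces one column (m = 1) and t has three (n = 3), so
-- per the rule above the value 99 lands in the rightmost column c,
-- while a and b take their defaults: the inserted row is (7, 8, 99).
CREATE TABLE IF NOT EXISTS t (x INT) SELECT 99;
```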

So, given the above manual explanation, the correct answer to the original blog post is:

a   | b
----+----
100 | 100

partly because there is an implicit COMMIT directly before the CREATE TABLE is executed (committing the 100,100 record to the table) and the primary key violation kills off the INSERTs of 1,1 in InnoDB. For a MyISAM table, the 1,1 record would be in the table, since MyISAM has no idea what a ROLLBACK is.
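The implicit-commit behaviour is the really nasty part, so here is a minimal illustration of it. This is not the original puzzle, just a stripped-down sketch, and the table names are my own assumptions:

```sql
-- An InnoDB table so that transactions are in play:
CREATE TABLE t1 (a INT PRIMARY KEY, b INT) ENGINE=InnoDB;

START TRANSACTION;
INSERT INTO t1 VALUES (100, 100);

-- Any DDL statement here triggers an implicit COMMIT *before* it
-- runs, so the (100,100) row is committed at this point:
CREATE TABLE t2 (x INT);

ROLLBACK;  -- too late: (100,100) was already implicitly committed

SELECT * FROM t1;  -- the (100,100) row is still there
```

With a MyISAM table, the same thing happens for a simpler reason: MyISAM commits every statement immediately and ignores ROLLBACK entirely.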

I Think Drizzle Should Follow PostgreSQL’s Example Here

On implicit commits before DDL operations, I believe they should all go bye-bye. DDL should be transactional in Drizzle and if a statement cannot be executed in a transaction, it should throw an error if there is an active transaction. Period.
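For contrast, this is what transactional DDL looks like in PostgreSQL today, which is the behaviour I’d like Drizzle to adopt (table name is again just for illustration):

```sql
-- In PostgreSQL, DDL participates in the transaction: the CREATE
-- TABLE below is rolled back along with the INSERT.
BEGIN;
CREATE TABLE t2 (x INT);
INSERT INTO t2 VALUES (1);
ROLLBACK;

-- t2 does not exist after the ROLLBACK; no implicit commit ever fired.
```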

For behaviour of CREATE TABLE ... SELECT acting like an INSERT INTO ... SELECT, that entire code path should be ripped out.

Although a few folks knew about where I and many of the Sun Drizzle team had ended up, we’ve waited until today to “officially” tell folks what’s up. We — Monty Taylor, Eric Day, Stewart Smith, Lee Bieber, and myself — are all now “Rackers”, working at Rackspace Cloud. And yep, we’re still workin’ on Drizzle. That’s the short story. Read on for the longer one.

An Interesting Almost 3 Years at MySQL

I left my previous position of Community Relations Manager at MySQL to begin working on Brian Aker’s newfangled Drizzle project in October 2008.

Many people at MySQL still think that I abandoned MySQL when I did so. I did not. I merely had gotten frustrated with the slow pace of change in the MySQL engineering department and its resistance to transparency. Sure, over the 3 years I was at MySQL, the engineering department opened up a bit, but it was far from the ideal level of transparency I had hoped to inspire when I joined MySQL.

For almost 3 years, I had sent numerous emails to the MySQL internal discussion lists asking the engineering and marketing departments, both headed by Zack Urlocker, to recognize the importance and necessity of major refactoring of the MySQL kernel, and the need to modularize the kernel or risk having more modular databases overtake MySQL as the key web infrastructure database. The focus was always on the short term, on keeping up with the Joneses as far as features went, and I railed against this kind of roadmap, instead pushing the idea of breaking up the server into modules that could be black-boxed and developed independently of the kernel. My ideas were met with mostly kind responses, but nothing ever materialized as far as major refactoring efforts were concerned.

I remember Jim Winstead casually responding to one of my emails, “Congratulations, you’ve just reinvented Apache 2.0”. And, yes, Jim, that was kind of the point…

The MySQL source code base had gotten increasingly unmaintainable over the years, and key engineers were extremely resistant to changing the internals of MySQL and modernizing it. There were some good reasons for being resistant, and some poor ones (such as “this is the way we’ve always done it”). Overall, though, it’s tough to question the short-term strategy that Zack, Marten Mickos, and others pursued. After all, they managed to maneuver MySQL into a winning position that Sun Microsystems thought was worth one billion dollars.

Working on Drizzle since October 2008 (officially)

I’m not the kind of person who likes to wait years to see change, and so the Drizzle project interested me: it was not concerned with backwards compatibility with MySQL, it wasn’t concerned with having a roadmap dependent on the whims of a few big customers, and it was very much interested in challenging the assumptions built into a 20-year-old code base. This is a project I could sink my teeth into. And I did.

Many folks have said that the only reason Drizzle is still around is because Sun continued to pay for a number of engineers to work on Drizzle as “an experiment of sorts” and that Drizzle has no customers and therefore nothing to lose and everything to gain. This was true, no doubt about it. At Sun CTO Labs, the few of us did have the ability to code on Drizzle without the pressure-cooker of product marketing and sales demands. We were lucky.

10 Months in Purgatory

So, around rolls April 2009. The stock market and worldwide economy had collapsed and recession was in the air. There’s one thing that is absolutely certain in recession economies: companies that have poor leadership and direction and are beholden to the interests of a large stockholder will seek an end to their misery through acquisition by a larger, stronger firm.

And Sun Microsystems was no different. JAVA stock plummeted to two dollars a share, and Jonathan Schwartz and the Sun board began shopping Sun around to the highest bidder. IBM was courted along with other tech giants. So was Oracle.

And it was with a bit of a hangover that I awoke at the MySQL conference in April 2009 to the news that Oracle had purchased Sun Microsystems. Joy. We’d just gone through 14 months of ongoing integration with Sun Microsystems and now it was going to start all over again.

Anyone who follows PlanetMySQL knows about the ensuing battle in the European Commission’s court regarding monopoly of Oracle in the database market with its acquisition of MySQL. Monty Widenius, Eben Moglen, even Richard Stallman, weighed in on the pros and cons of Oracle’s impending control over MySQL.

All the while, we Sun Microsystems employees had to hold our tongues and try to keep our jobs as Sun laid off thousands more workers while the EC battle ensued. Not fun. It was the employment equivalent of purgatory. And the time just dragged on, with many employees, including myself and the Sun Drizzle team, not having a clue as to what would happen to us. Management was completely silent about future plans. Oracle made zero attempts to outline its future strategy regarding software, and thus most software employees simply kept on doing their work, not knowing if the pink slip was arriving tomorrow or not. Lots of fun that was.

Oracle Doesn’t Need Our Services — Larry Don’t Need No Stinkin’ Cloud

The acquisition finally closed and very shortly afterwards, I got a call from my boss, Lee Bieber, that Oracle wouldn’t be needing our services. Monty, Eric, and Stewart had already resigned; none of them had any desire to work for Oracle. Lee and I had decided to see what they had in mind for us. Apparently, not much.

Larry Ellison has gone on record that the whole “cloud thing” is faddish. I don’t know whether Larry understands that cloud computing and infrastructure-as-a-service, platform-as-a-service, and database-as-a-service will eventually put his beloved Oracle cash cow in its place or not. I don’t know whether Oracle is planning on embracing the cloud environments which will continue to eat up the market share of more traditional in-house environments upon which their revenue streams depend. I really don’t.

But what I do know is that Rackspace is betting that providing these services is what the future of technology will be about.

Happiness is a Warm Cloud

Our team has landed at Rackspace Cloud. I’ve now been down to San Antonio twice to meet with key individuals with whom we’ll be working closely. Rackspace is not shy about why they wanted to acquire our team. They see Drizzle as a database that will provide them an infrastructure piece that will be modular and scalable enough to meet the needs of their very diverse Cloud customers, of which there are many tens of thousands.

Rackspace recognizes that the pain points they feel with traditional MySQL cannot be solved with simple hacks and workarounds, and that to service the needs of so many customers, they will need a database server that thinks of itself as a friendly piece of their infrastructure and not the driver of its applications. Drizzle’s core principles of flexibility and focus on scalability align with the goals Rackspace Cloud has for its platform’s future.

Rackspace is also heavily invested in Cassandra, and sees integration of Drizzle and Cassandra as being a key way to add value to its platforms and therefore for its customers.

Rackspace is all about the customers, and this is a really cool thing to experience. It’s typical for companies to claim they are all about the customer — in fact, every company I’ve ever worked for has claimed this. Rackspace is the first company I’ve worked for where you actually feel this spirit, though. You can see the fanaticism of Rackers and how they always view what they do in terms of service to the customer. It’s infectious, and I’m pretty psyched to be on their team.

Anyway, that’s my story and I’m stickin’ to it. See y’all on the nets.

I thought a bit about the question and then answered the following in the “Other, please specify:” area:

Bit of a mix between all three above.

The more I think about it, the more I really do feel that Drizzle’s development process is indeed a mixture of individuals, groups, and a benevolent dictator. And I think it works pretty well. Here are some of the reasons why I believe our development process is effective in enabling contributions by being a mix of these three styles.

Who’s the Benevolent Dictator of Drizzle?

First, let me get the BDFL question out of the way. We’ve made a big deal in the Drizzle community and mailing lists that anyone and everyone is encouraged to participate in the development process — so why would I say that Drizzle has a benevolent dictator?

Well, although he would probably disagree with the title of BDFL, Brian Aker does have some dictator-like abilities with regard to the development process, and rightfully so. Brian came up with many of the concepts that Drizzle aspires to be, and Brian has more experience working on the code base than any other contributor.

After having worked closely with Brian now for 18 months or so, I can definitively say that Brian’s brain works in a very, well, interesting way. Those of us who work with him understand that sometimes his brain works so fast, his typing fingers struggle to keep up, resulting in something I call “Krowspeak”. It’s kinda funny sometimes trying to translate it.

With this wonderfully unique noodle, Brian tends to knock out large chunks of code at a time, and often he wants to push these chunks of code into our build and regression system and into trunk to see the results of his work quickly. Sometimes, this can cause other branches to get out of sync and get merge conflicts, and Brian will inform branch owners of the conflicts and work with them to resolve them.

So, regarding dictator-like development processes, I suppose we have Brian acting as the merge dictator because he’s got a lot of experience and understands best how both his code and others’ code integrate. We tried a little while back having myself and Monty Taylor be merge captains, but that distribution of merge work actually created a number of other problems, and we’ve since gone back to Brian being the merge captain by himself, with Lee, Monty, and myself improving our automated build and regression system to help Brian with the repetitive work.

That said, what Brian does not do is make decisions in a dictator-like way. Decisions about code style, reviews, features, syntax changes, etc. are made on the mailing list by consensus vote. If a consensus is not reached, generally no change is made which would depend on the decision. Brian does not influence the direction of the software or the source code style any more than anyone else on the mailing list who expresses an opinion about an issue; and for this, I greatly respect his wisdom to seek consensus in an open and community-oriented way.

Groups Empowered to Make Decisions

I’m assuming that what Selena’s “large/small group empowered to make decisions” answer meant was what is sometimes called “Cabal Leadership” of a project. In other words, there is some group which steers the project and makes decisions about the project which affect the rest of the project’s contributors.

Drizzle has at least one such group, the Sun Microsystems Drizzle Team, which is composed of Brian, Monty Taylor, Lee Bieber, Eric Day, Stewart Smith, and myself. One might call us the core committers for Drizzle.

However, while the Sun Drizzle team certainly is empowered to guide development, it is no different than any other group of developers that choose to contribute to Drizzle. There isn’t a “what the Sun Drizzle team decides” rule in effect. Our “power” in the development process is no greater or less than any other group of contributors. We act merely as a team of individuals who work on the Drizzle code and advocate for the project’s goals.

Individuals Empowered to Make Decisions

One thing I’ve been impressed with in the past 18 months is how the Drizzle community has embraced the opinions and work of individual contributors. I believe Toru Maesaka, Andrew Hutchings, Diego Medina and Padraig O’Sullivan were among the first individuals to begin actively contributing to Drizzle. Since then, dozens of others have joined the developer and advocate community, with each individual carving out a piece of the source code or community activities that they want to work on.

I have learned much from all these individuals over the last year or so, and I’ve tried my best to share knowledge and encourage others to do the same. Our IRC channel and mailing list are active places of discussion. Our code reviews are always completely open to the public for comments and discussed transparently on Launchpad, and this code review process has been a great mixing bowl of opinion, discussion, learning and debate. I love it.

More and more we have developers showing up and taking ownership of a bug, a blueprint, or just a part of the code that interests them. And nobody stands in their way and says “Oh, no, you shouldn’t work on that because <insert another contributor’s name> owns that code.” Instead, what you will more likely see on the lists or on IRC is a response like “hey, that’s awesome! be sure to chat with <insert another contributor’s name>. They are interested in that code, too, and you should share ideas!” This is incredibly refreshing to see.

In short, the Drizzle developer process is a nice mix of empowered individuals and groups, and a dash of dictatorship just to keep things moving efficiently. It’s open, transparent, and fun to work on Drizzle. Come join us!