
Migrating Sakila from MySQL to PostgreSQL

As presented at the PostgreSQL Conference Europe, the new version of pgloader is now able to fully migrate a MySQL database, including discovering the schema, casting data types, and transforming data and default values. Sakila is the traditional MySQL example database; in this article we're going to fully migrate it over to PostgreSQL.

What about switching to PostgreSQL? It's easier than ever.

Without further ado, here's what happens when you ask pgloader to please migrate the whole thing over to PostgreSQL:

In those two and a half seconds, the whole dataset has been converted. Note that the indexes are built in parallel with the data loading, and all indexes against the same relation are built in parallel with each other, too.

Here's the pgloader command that we used:

    LOAD DATABASE
         FROM      mysql://root@localhost/sakila
         INTO postgresql://localhost:54393/sakila

    WITH include drop, create tables, no truncate,
         create indexes, reset sequences,
         foreign keys

     SET maintenance_work_mem to '128MB',
         work_mem to '12MB',
         search_path to 'sakila'   /* migrate to a specific schema */

    CAST type datetime to timestamptz
              drop default drop not null using zero-dates-to-null,
         type date drop not null drop default using zero-dates-to-null,
         type year to integer

    BEFORE LOAD DO
    $$ create schema if not exists sakila; $$;
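Each CAST rule above follows the same pattern: a source MySQL type, optional column options to drop, a target PostgreSQL type, and an optional transformation function applied to the values. As a sketch, one could extend the command with an extra rule such as:

    CAST type tinyint to boolean using tinyint-to-boolean

Here `tinyint-to-boolean` is one of pgloader's built-in transformation functions; MySQL applications commonly store booleans as `tinyint`, and a rule like this maps them to a native PostgreSQL `boolean` column. Whether you want that rule depends on how your schema actually uses `tinyint`.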

Here's an example of how the casting rules work in this very case, where we've mostly been using the default rules:

We can see that we're using a couple of custom data types in PostgreSQL: those come from converting the ENUM and SET datatypes that the MySQL database is using here. The SET datatype is simply converted to an array of ENUM values in PostgreSQL.
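As an illustration, Sakila's `film` table declares `rating` as a MySQL ENUM and `special_features` as a MySQL SET. On the PostgreSQL side, pgloader creates matching custom types; the exact type names below are an assumption, as pgloader derives them from the table and column definitions:

```sql
-- Assumed type names; pgloader derives them from table and column names.
CREATE TYPE film_rating AS ENUM ('G', 'PG', 'PG-13', 'R', 'NC-17');
CREATE TYPE film_special_features AS ENUM
  ('Trailers', 'Commentaries', 'Deleted Scenes', 'Behind the Scenes');

-- The ENUM column maps directly, and the SET column becomes an array:
--   rating            film_rating
--   special_features  film_special_features[]
```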

At this point, you're left with reviewing the queries in your code and adapting them. Triggers, stored procedures, and views are also left unhandled.
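Typical query adaptations are small and mechanical; here are a few common MySQL-to-PostgreSQL rewrites, as a sketch rather than an exhaustive list:

```sql
-- Identifier quoting: backticks become double quotes.
--   MySQL:      SELECT `title` FROM `film`;
--   PostgreSQL: SELECT "title" FROM "film";

-- LIMIT with an offset: the two-argument form is spelled out.
--   MySQL:      SELECT title FROM film LIMIT 10, 5;
--   PostgreSQL: SELECT title FROM film LIMIT 5 OFFSET 10;

-- IFNULL is MySQL-specific; COALESCE is standard and works in both.
--   MySQL:      SELECT IFNULL(description, 'n/a') FROM film;
--   PostgreSQL: SELECT COALESCE(description, 'n/a') FROM film;
```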