You may have read a number of short articles with the same title on the web, but they offered only snippets of the information you needed. It's time to put it all together.

You have a project in MySQL and suddenly you find out that you need to switch to PostgreSQL. You discover that there are many flavours of SQL and that your seemingly basic constructs throw a lot of errors. You don't have time to rewrite your code from scratch; that may come later...

With PostgreSQL you may still feel a little like a second-class citizen, but not an ignored one. Some major projects, such as Asterisk, Horde and DBMail, have recognized its qualities: although MySQL was their first-choice database, they are making an effort to support PostgreSQL too.

Most likely you don't need this chapter, but very briefly: after you have installed PostgreSQL on your Linux machine (be it from a package or following these notes), you need to do something like the following.
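A sketch of the typical first steps; the data directory, service name, and the user and database names (myuser, mydb) are placeholders that vary by distribution:

```shell
# Initialize the data directory and start the server
sudo -u postgres initdb -D /var/lib/postgresql/data
sudo systemctl start postgresql

# Create a database user and a database owned by it
sudo -u postgres createuser --pwprompt myuser
sudo -u postgres createdb -O myuser mydb
```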

Have a look at http://pgloader.io: you can migrate your MySQL database to PostgreSQL in a single command:

pgloader mysql://user@localhost/dbname postgresql:///dbname

This handles type casting with a default set of casting rules, as well as schema discovery in MySQL and schema creation in PostgreSQL, including tables, columns, constraints (primary keys, foreign keys, NOT NULL), default values, and secondary indexes. The data are transformed on the fly to be accepted by PostgreSQL, which includes getting rid of zero dates (there is no year zero in our calendar, nor a month or day zero; MySQL doesn't care about that, but PostgreSQL is quite strongly opinionated that a value with a year zero is not a date).

For more advanced options, or if you want to change the default settings, pgloader's MySQL support[1] lets you write a full command in its own language, with different rules describing how you want the migration done.
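As a rough sketch of what such a command file can look like (connection strings are placeholders; check the pgloader documentation[1] for the exact syntax and the full list of options):

LOAD DATABASE
     FROM mysql://user@localhost/dbname
     INTO postgresql:///dbname
 WITH include drop, create tables, create indexes, reset sequences
 CAST type datetime to timestamptz using zero-dates-to-null;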

but even then you will have to change the escaped characters: replace \t with ^I, \n with ^M, a single quote (') with a doubled single quote (''), and an escaped backslash (\\) with a single backslash. This cannot be done trivially with the sed command; you may need to write a script for it (Ruby, Perl, etc.). There is a MySQL-to-PostgreSQL Python conversion script (use --default-character-set=utf8 when creating your mysqldump to make it work).
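As a sketch, the replacements above can be scripted in a few lines of Python. This is a naive single-pass version: it assumes every backslash in the dump starts one of the listed escapes, which real dumps containing binary data may violate.

```python
import re

# Map MySQL backslash escapes to PostgreSQL-friendly replacements
REPLACEMENTS = {
    r"\t": "\t",    # \t -> literal tab (^I)
    r"\n": "\n",    # \n -> literal newline
    r"\'": "''",    # \' -> doubled single quote
    r"\\": "\\",    # \\ -> single backslash
}

def convert(text: str) -> str:
    # A single pass, so a replacement is never re-scanned as a new escape
    return re.sub(r"\\[tn'\\]", lambda m: REPLACEMENTS[m.group(0)], text)
```

Run your dump through convert() and write the result out before feeding it to psql.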
A much better and proven solution is to prepend your dump with the following lines:

SET standard_conforming_strings = 'off';
SET backslash_quote = 'on';

These options force the PostgreSQL parser to accept non-ANSI-SQL-compatible escape sequences (PostgreSQL will still issue HINTs about them; you can safely ignore those). Do not set these options globally: that may compromise the security of the server!

You also have to manually modify the data types etc. as discussed later.

After you convert your tables, import them the same way you are used to in MySQL, that is:
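For example (the database name mydb and the file name dump.sql are placeholders):

```shell
# Feed the converted dump to psql, stopping at the first error
psql -d mydb -v ON_ERROR_STOP=1 -f dump.sql
```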

When you have a large SQL dump containing binary data, it will not be easy to modify the data structure, so there is another way to export your data to PostgreSQL.
MySQL has an option to export each table from the database as a separate .sql file with the table structure and a .txt file with the table's data in a CSV-like format:
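This export can be done with mysqldump's --tab option (the directory and database name are placeholders; the MySQL server must be able to write to the directory):

```shell
# Writes <table>.sql (structure) and <table>.txt (tab-separated data)
# for every table in mydb into /var/tmp/export
mysqldump --tab=/var/tmp/export --default-character-set=utf8 mydb
```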

When the table structure is ready, load it as shown above.
Then prepare the data files: replace carriage-return characters with "\r" and remove any characters that are invalid in your data's encoding.
Here is an example of a bash script that does this and loads all the data into your database:
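A minimal sketch of such a script (assuming GNU sed, data files named <table>.txt, and a target database mydb; all names are placeholders):

```shell
#!/bin/sh
DB=mydb
DATADIR=/var/tmp/export

for f in "$DATADIR"/*.txt; do
    table=$(basename "$f" .txt)
    # Escape literal carriage returns so COPY sees them as \r
    sed -i 's/\r/\\r/g' "$f"
    # Load the file through the client with psql's \copy
    psql -d "$DB" -c "\\copy $table FROM '$f'"
done
```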

MySQL allows ' or " to quote values (e.g. WHERE name = "John"). This is not the ANSI standard for databases. PostgreSQL uses only single quotes for values (e.g. WHERE name = 'John'). Double quotes are reserved for quoting system identifiers: field names, table names, etc. (e.g. WHERE "last name" = 'Smith'). MySQL uses ` (accent mark or backtick) to quote system identifiers, which is decidedly non-standard. Note: you can make MySQL interpret quotes like PostgreSQL does with SET sql_mode='ANSI_QUOTES'.

... WHERE lastname="smith"

... WHERE lower(lastname)='smith'

PostgreSQL is case-sensitive for string comparisons: the value 'Smith' is not the same as 'smith'. This is a big change for many users coming from MySQL (where VARCHAR and TEXT columns are case-insensitive unless the "binary" flag is set) and from other small database systems, like Microsoft Access. In PostgreSQL, you can either:

Use the correct case in your query. (i.e. WHERE lastname='Smith')

Use a conversion function, like lower() to search. (i.e. WHERE lower(lastname)='smith')

Use a case-insensitive operator, like ILIKE or ~*

In MySQL:

`LastName` = `lastname`

but in PostgreSQL:

"LastName" <> "lastname"

Database, table, field and column names in PostgreSQL are case-independent, unless you created them with double quotes around the name, in which case they are case-sensitive. In MySQL, table names can be case-sensitive or not, depending on which operating system you are using. Note that PostgreSQL actively folds all unquoted names to lower case, and so returns lower case in query results!

SERIAL is in fact backed by an entity named SEQUENCE, which exists independently of the rest of your table. If you want to clean up your system after dropping a table, you may also have to DROP SEQUENCE name. More on that topic...
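For illustration (the table and column names are hypothetical):

CREATE TABLE items (id SERIAL PRIMARY KEY, name text);
-- SERIAL implicitly created a sequence, typically named items_id_seq:
SELECT currval('items_id_seq');  -- valid after an INSERT in this session
DROP TABLE items;
-- if the sequence survived the table on your version, clean it up:
DROP SEQUENCE IF EXISTS items_id_seq;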

Note for MySQL:

column SERIAL PRIMARY KEY

or

column SERIAL,
PRIMARY KEY(column)

This will result in two indexes on the column: one generated by the PRIMARY KEY constraint, and one by the implicit UNIQUE constraint that is part of the SERIAL alias.
This has been reported as a bug and might be corrected.

(Note: MySQL REPLACE INTO deletes the old row and inserts the new, instead of updating in-place.)
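On PostgreSQL 9.5 and later, the closest equivalent is INSERT ... ON CONFLICT, which updates the existing row in place instead of deleting and re-inserting it (hypothetical table and columns):

INSERT INTO users (id, name)
VALUES (1, 'John')
ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name;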

SELECT ... INTO OUTFILE '/var/tmp/outfile'

COPY ( SELECT ... ) TO '/var/tmp/outfile'

SHOW DATABASES

Run psql with the -l parameter

or using psql:

\l

or

SELECT datname AS Database FROM pg_database
WHERE datistemplate = 'f'

PostgreSQL doesn't implement this non-standard SQL extension.

SHOW TABLES

Using psql:

\dt

or

SELECT c.relname AS Tables_in FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE pg_catalog.pg_table_is_visible(c.oid)
AND c.relkind = 'r'
AND relname NOT LIKE 'pg_%'
ORDER BY 1

NOTE: this is not a simple string-substitution fix, as you need to know the name of the SERIAL's underlying sequence (unlike AUTO_INCREMENT in MySQL). Also note that PostgreSQL exposes the OID of the last row inserted by the most recent SQL command.

NOTE2: An even better way to replace LAST_INSERT_ID() is to create a rule, because this cannot suffer from race conditions:

CREATE RULE get_{table}_id_seq AS ON INSERT TO {table} DO SELECT currval('{table}_id_seq'::text) AS id;

(Usage is somewhat strange, as you get a result set back from an INSERT statement, but it works very well.)
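On PostgreSQL 8.2 and later, the simplest replacement for LAST_INSERT_ID() is the RETURNING clause, which likewise cannot race with other sessions (hypothetical table and column):

INSERT INTO users (name) VALUES ('John') RETURNING id;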

ERROR: relation "something" does not exist - this usually means the table doesn't exist, probably because you didn't create it with the new data types or syntax. Also watch out for case-folding issues; PostgreSQL = postgresql != "PostgreSQL".