We are tasked with a complete rewrite of an old system, and as you might guess, one of the challenges awaiting us is database migration. It is a double challenge: we are not only migrating to a new, heavily changed schema, but also from Mnesia to PostgreSQL.

Unfortunately, the legacy node has been running Mnesia for ages without any node name, so at first the only easy approach I could think of was using mnesia:dump_to_textfile/1. But either it is broken in some way or the existing DB is really crappy, because I couldn't get it to export all the data. 🙂
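For reference, the dump attempt looked roughly like this (the file path is made up; this is just the standard textbook usage, not a fix):

```erlang
%% Sketch of the attempted export, run on the legacy node.
%% mnesia:dump_to_textfile/1 is supposed to write all local tables
%% out as plain Erlang terms, which mnesia:load_textfile/1 can
%% later re-import on another node.
ok = mnesia:start(),
mnesia:wait_for_tables(mnesia:system_info(local_tables), 5000),
mnesia:dump_to_textfile("/tmp/legacy_dump.txt").
%% In our case the resulting file was incomplete, hence plan B below.
```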

OK, so perhaps load the Mnesia data into a named node, set up a Mnesia cluster, and then migrate the data? Nope: the tables are displayed as "remote", and I couldn't find a way to modify the node name in the Mnesia schema files.

You might consider it a messy solution, but I decided to first start a nameless node and then use this nice command:

net_kernel:start(['newname@host', shortnames]).
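Once that call returns, you can verify the rename from the migration node; the cookie name here is an assumption, your legacy node's cookie will differ:

```erlang
%% From the (named) migration node: check that the freshly named
%% legacy node is reachable. The cookies must match, so set ours
%% for that node first ('legacy_cookie' is a placeholder).
erlang:set_cookie('newname@host', legacy_cookie),
pong = net_adm:ping('newname@host').
%% pang instead of pong usually means a cookie or hostname mismatch.
```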

And there you go! Now you have a named node, which you can connect to in the usual Erlang way. 🙂 Sadly, this breaks Mnesia, so we are going to exploit the fact that tables other than disc_only_copies are kept in ETS. With an RPC call from the node doing the migration, we can get all the data and save it in PostgreSQL:

rpc:call('newname@host', ets, tab2list, [table])
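The whole migration then boils down to a loop over the rows fetched via RPC. Here is a minimal sketch using epgsql as the PostgreSQL client; the record shape, table name, columns, and credentials are all illustrative, not our real schema:

```erlang
%% Sketch only: pull one table off the legacy node over RPC and
%% insert its rows into PostgreSQL via epgsql. Everything named
%% here (columns, credentials, record layout) is an assumption.
migrate(Table) ->
    Rows = rpc:call('newname@host', ets, tab2list, [Table]),
    {ok, C} = epgsql:connect(#{host     => "localhost",
                               username => "migrator",
                               password => "secret",
                               database => "newdb"}),
    lists:foreach(
      fun({_RecTag, Id, Value}) ->  %% assumed 3-element record
              {ok, 1} = epgsql:equery(
                          C,
                          "INSERT INTO users (id, value) VALUES ($1, $2)",
                          [Id, Value])
      end,
      Rows),
    ok = epgsql:close(C).
```

Since the order of rows in an ETS table of type set is undefined, the script can't rely on insertion order; for our small tables that doesn't matter.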

Of course, it probably won't work too well with large amounts of data, but in our case the largest table has fewer than 1000 records, so I don't worry about performance here.

BTW: I haven't tested the migration script yet. 🙂 The RPC call works, though, and returns the correct set of data from the legacy node. 🙂