The Datomic transactor handles concurrency by processing transactions
serially, but that doesn't mean it isn't fast! In my experience, the
bottleneck is actually in reshaping the data and building the
transactions, so I use core.async to parallelize just about everything
in the import pipeline.
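
Here's a minimal sketch of that shape, assuming a hypothetical raw-record->tx-data transform function and an existing connection: the CPU-bound reshaping fans out across worker threads with core.async's pipeline-blocking, while a single loop feeds the finished transactions to the transactor.

```clojure
(require '[clojure.core.async :as async]
         '[datomic.api :as d])

(defn import!
  "Reshape raw-records into tx-data on 8 worker threads, then
  transact serially -- the transactor serializes writes anyway."
  [conn raw-records raw-record->tx-data]
  (let [in  (async/to-chan raw-records)
        out (async/chan 100)]
    (async/pipeline-blocking 8 out (map raw-record->tx-data) in)
    (loop []
      (when-some [tx-data (async/<!! out)]
        @(d/transact conn tx-data) ; deref blocks until the tx is durable
        (recur)))))
```

In practice you'd also batch records so each transaction carries a reasonable number of datoms, but the shape is the same.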

I use DynamoDB as my storage backend in production. I used to run my
import tasks directly against the production transactor and storage.
Lately, though, I've found it really helpful to run them against a
locally running transactor and the dev storage backend instead.
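
Concretely, the only thing that changes between the two setups is the connection URI. These names are hypothetical; substitute your own database name, table, and region:

```clojure
;; Local import target: a transactor running the dev storage protocol.
(def local-uri "datomic:dev://localhost:4334/big-import")

;; Production target: the same database on DynamoDB storage.
(def prod-uri "datomic:ddb://us-east-1/my-datomic-table/big-import")
```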

Running an import locally means I don't have to worry about
networking, which speeds the whole process up quite a bit; it also
gives me much more freedom to iterate on the database design itself.
(I rarely get an import right the first time.) And in the case of
DynamoDB, I save some money, since I don't have to keep my
"write throughput" cranked way up for as long.

Once everything looks good in the local database, I use Datomic's
built-in backup/restore facilities to ship it up to production.
Assuming you've already deployed a production transactor and
provisioned DynamoDB storage, here's the process I follow (sketched as
shell commands after the list):

1. Run the datomic backup-db command against the local import.

2. Crank the DynamoDB "write throughput" way up (on the order of 1,000
write capacity units).

3. Run the datomic restore-db command from the backup folder to the
remote database.

4. Turn the "write throughput" back down to whatever value I plan to
use in steady-state operation (see the Datomic documentation for more
information).
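
In command form, the round trip looks roughly like this. The paths, table name, and URIs are hypothetical; bin/datomic ships with the Datomic distribution, and I'm showing the throughput changes with the AWS CLI:

```sh
# 1. Back up the locally imported database to a directory.
bin/datomic backup-db \
  datomic:dev://localhost:4334/big-import \
  file:/tmp/big-import-backup

# 2. Crank write throughput up before restoring.
aws dynamodb update-table --table-name my-datomic-table \
  --provisioned-throughput ReadCapacityUnits=100,WriteCapacityUnits=1000

# 3. Restore the backup into the DynamoDB-backed production database.
bin/datomic restore-db \
  file:/tmp/big-import-backup \
  datomic:ddb://us-east-1/my-datomic-table/big-import

# 4. Dial write throughput back down for steady-state use.
aws dynamodb update-table --table-name my-datomic-table \
  --provisioned-throughput ReadCapacityUnits=100,WriteCapacityUnits=50
```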

The heart of almost every business is its data. Datomic is a great
choice for business data, in part because it treats all data as
important: nothing is overwritten. New things are learned, but the old
facts are not replaced. And knowing how to get your data into Datomic
is half the battle.