Turn off autocommit and just do one commit at the end. (In
plain SQL, this means issuing BEGIN at
the start and COMMIT at the end. Some
client libraries may do this behind your back, in which case
you need to make sure the library does it when you want it
done.) If you allow each insertion to be committed separately,
PostgreSQL is doing a lot of
work for each row that is added. An additional benefit of doing
all insertions in one transaction is that if the insertion of
one row were to fail, then all rows inserted up to that point
would be rolled back, so you won't be stuck with partially
loaded data.
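
For example, a single-transaction load might look like this in
plain SQL (the films table and its rows are illustrative):

    BEGIN;
    INSERT INTO films VALUES ('B6717', 'Tampopo', 110);
    INSERT INTO films VALUES ('HG120', 'The Dinner Game', 140);
    -- ... many more INSERT commands ...
    COMMIT;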

Use COPY to load all
the rows in one command, instead of using a series of
INSERT commands. The COPY command is optimized for loading large
numbers of rows; it is less flexible than INSERT, but incurs significantly less overhead
for large data loads. Since COPY is a
single command, there is no need to disable autocommit if you
use this method to populate a table.
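
For example, assuming the same illustrative films table and a
data file readable by the server, a bulk load might look like:

    COPY films FROM '/usr/local/data/films.dat';

Note that COPY with a file name reads the file on the server
itself; from psql, the \copy meta-command offers the same
functionality for a file on the client machine.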

If you cannot use COPY, it may help
to use PREPARE to
create a prepared INSERT statement,
and then use EXECUTE as many times as
required. This avoids some of the overhead of repeatedly
parsing and planning INSERT.
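
As a sketch, again using the illustrative films table (the
statement name films_insert is arbitrary):

    PREPARE films_insert (text, text, integer) AS
        INSERT INTO films VALUES ($1, $2, $3);
    EXECUTE films_insert('B6717', 'Tampopo', 110);
    EXECUTE films_insert('HG120', 'The Dinner Game', 140);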

Note that loading a large number of rows using COPY is almost always faster than using
INSERT, even if PREPARE is used and multiple insertions are
batched into a single transaction.

If you are loading a freshly created table, the fastest way
is to create the table, bulk load the table's data using
COPY, then create any indexes needed
for the table. Creating an index on pre-existing data is
quicker than updating it incrementally as each row is
loaded.
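
Continuing with the illustrative names used above, the whole
sequence might be:

    CREATE TABLE films (code text, title text, length integer);
    COPY films FROM '/usr/local/data/films.dat';
    CREATE INDEX films_title_idx ON films (title);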

If you are augmenting an existing table, you can drop the
index, load the table, and then recreate the index. Of course,
the database performance for other users may be adversely
affected during the time that the index is missing. Also think
twice before dropping a unique index, since the error checking
afforded by the unique constraint will be lost while the index
is missing.
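
For example (index and file names illustrative):

    DROP INDEX films_title_idx;
    COPY films FROM '/usr/local/data/more_films.dat';
    CREATE INDEX films_title_idx ON films (title);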

Temporarily increasing the maintenance_work_mem
configuration variable when loading large amounts of data can
improve performance. This is because when a B-tree
index is created from scratch, the existing content of the
table needs to be sorted. Allowing the merge sort to use more
memory means that fewer merge passes will be required. A larger
setting for maintenance_work_mem may
also speed up validation of foreign-key constraints.
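
Since maintenance_work_mem can be set per session, the increase
need not be server-wide. A sketch, with an illustrative value:

    SET maintenance_work_mem = '512MB';
    CREATE INDEX films_title_idx ON films (title);
    RESET maintenance_work_mem;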

Temporarily increasing the checkpoint_segments
configuration variable can also make large data loads faster.
This is because loading a large amount of data into
PostgreSQL can cause
checkpoints to occur more often than the normal checkpoint
frequency (specified by the checkpoint_timeout configuration variable).
Whenever a checkpoint occurs, all dirty pages must be flushed
to disk. By increasing checkpoint_segments temporarily during bulk data
loads, the number of checkpoints that are required can be
reduced.
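
Unlike maintenance_work_mem, checkpoint_segments cannot be
changed with SET; it must be set in postgresql.conf and the
server configuration reloaded (for example with pg_ctl reload).
An illustrative setting:

    # postgresql.conf
    checkpoint_segments = 30   # illustrative; each segment is 16MB of WAL

Remember to restore the previous value once the bulk load is
finished.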

Whenever you have significantly altered the distribution of
data within a table, running ANALYZE is strongly recommended. This
includes bulk loading large amounts of data into the table.
Running ANALYZE (or VACUUM ANALYZE) ensures that the planner has
up-to-date statistics about the table. With no statistics, or
with obsolete statistics, the planner may make poor decisions
during query planning, leading to poor performance on queries
that use the table.
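
For example (table name illustrative):

    ANALYZE films;

or, to also reclaim space occupied by dead rows:

    VACUUM ANALYZE films;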