I'm kind of new to writing unlimited dimensions in netCDF and ran into
some odd behavior when attempting to write an unlimited dimension with
netCDF4 and Fortran90. In short, I find two problems (for details, see
the examples and code below):
1) Writing unlimited dimensions in netCDF4 significantly increases file
size over the same data written with version 3.6. I realize I could turn
on deflate ... but why would the netCDF4 file with an unlimited
dimension be 10x bigger than the ver.3.6 or ver.4 file?

The file size difference is much greater for tiny files than for larger
files.
The netCDF-4 format is HDF5, which has much more header information
than netCDF-3 classic format files. So a very small file (i.e. one
without much data) will be bigger in HDF5 due to the increased header
information.
But once you start writing data to the file, the header matters less,
and when you are writing reasonably sized data files there should not
be much difference.
For example, with a 600 x 1200 array of ints, I get the following
sizes:
-rw-r--r-- 1 ed ustaff 2880104 Aug 1 13:59 tst_unlims_3.nc
-rw-r--r-- 1 ed ustaff 2917551 Aug 1 13:59 tst_unlims_4.nc

2) Writing large amounts of data along the unlimited dimension seems
impossible? Is there a performance issue? I realize my code is likely
not optimized. Maybe I'm using the wrong netCDF flags? When attempting
to write 1 million points along an unlimited dimension, the ver.3 file
(not shown) completes quickly, but the ver.4 file never completes.

Hmmm, I am having the same problem. I will look into this some more
and get back to you.
Thanks,
Ed

Ed:

For what it's worth, I'm also seeing very poor performance writing large
chunks of data along the unlimited dimension in the NETCDF4 and
NETCDF4_CLASSIC formats (using the Python interface, which is just a
wrapper around the C interface). It works fine if the format is NETCDF3,
though.