Re: overcoming netcdf3 limits

As a quick answer to the question: we (Sandia Labs) use netcdf underneath our exodusII file format for storing finite element results data.
If the mesh contains #nodes nodes and #elements elements, then there
will be a dataset of size #elements*8*4 bytes (assuming a hex element
with 8 nodes and 4 bytes/int) to store the nodal connectivity of each
hex element in a group of elements (an "element block"). Given CDF-2's
4 GiB per-variable limit, this caps us at ~134 million elements per
element block, which is large, but not enough to give us more than a
few months of breathing room. Using the CDF-1 format, we top out at
about 30 million elements or less, a limit we hit routinely.
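
The ceiling above can be checked with a small arithmetic sketch (the 8-node/4-byte layout and the ~4 GiB CDF-2 per-variable bound are taken from the message; the variable names are mine):

```python
# Connectivity dataset size for one block of hex elements, per the message:
# num_elements * 8 node ids * 4 bytes per int.
def connectivity_bytes(num_elements):
    return num_elements * 8 * 4

CDF2_VAR_LIMIT = 2**32                 # ~4 GiB per-variable bound (CDF-2)

max_elements = CDF2_VAR_LIMIT // (8 * 4)
print(max_elements)                    # 134217728, i.e. ~134 million elements
```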

I'm not sure I understand the problem yet:
In the file you sent me, you use time as the record variable.
Each record variable must be less than 2^32 bytes per record, not counting the record dimension.
So you can have about 2^29 elements, assuming each element is 8 bytes. And you
can have 2^32 time steps.
The non-record variables are dimensioned (num_elements, num_nodes). The number of
nodes seems to be 8, and these are ints, so that's 32 bytes * num_elements,
giving a max of 2^27 elements = 134 million elements. Currently the
largest you have is 33000. Do you need more than 2^27?
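
The two ceilings derived above can be sketched the same way, assuming the 2^32-byte per-record / per-variable bound of CDF-2:

```python
SIZE_LIMIT = 2**32                  # CDF-2 bound, per record / per variable

# Record variable: 8 bytes per element per record (per the message).
print(SIZE_LIMIT // 8)              # 536870912 = 2**29 elements per record

# Fixed variable dimensioned (num_elements, num_nodes) with num_nodes = 8
# and 4-byte ints: 32 bytes per element.
print(SIZE_LIMIT // (8 * 4))        # 134217728 = 2**27 elements
```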

I'm also not sure what you mean by "a few months breathing room".

There is a PDF file at
http://endo.sandia.gov/SEACAS/Documentation/exodusII.pdf that shows
(starting at page 177) how we map exodusII onto netcdf. There have been
some changes since the report was written to reduce some of the dataset
sizes. For example, we now split the "coord" dataset into 3 separate
datasets, and we also split vals_nod_var into one dataset per nodal
variable.
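
A rough sketch of why splitting helps under a per-variable size cap, assuming coordinates are stored as 8-byte doubles with 3 components per node (the exact types and layout are exodusII details not spelled out in the message):

```python
VAR_LIMIT = 2**32                      # CDF-2 per-variable bound

# One combined "coord" variable holding x, y, z for every node:
combined_max_nodes = VAR_LIMIT // (3 * 8)

# Three per-axis variables, each holding one 8-byte component per node:
per_axis_max_nodes = VAR_LIMIT // 8

print(per_axis_max_nodes // combined_max_nodes)  # 3: the node ceiling triples
```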
--Greg
John Caron wrote:

Hi Rob:
Could you give use case(s) where the limits are being hit?
I'd be interested in actual dimension sizes, number of variables,
whether you are using a record dimension, etc.
Robert Latham wrote: