MPAS-Atmosphere Meshes

Several resolutions of quasi-uniform meshes, plus two variable-resolution meshes, are
available for download. Each download provides an SCVT mesh on the unit
sphere, the mesh connectivity (graph.info) file for the mesh, and
partitionings of the mesh (e.g., graph.info.part.32) for various MPI
task counts. Other meshes may be available upon request from the
MPAS-Atmosphere developers by sending mail to
mpas-atmosphere-helpATgooglegroups.com.

All mesh files supplied here use double-precision real values.
Running MPAS-Atmosphere in single precision, however, requires single-precision
SCVT mesh files, and all pre-processing steps must be run with a single-precision
build of init_atmosphere_model using those files.
The double-precision mesh files provided here may be converted to
single precision with the double_to_float_grid converter program.
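
The converter is run from the command line; the invocation below is only a
sketch, and the argument order and filenames are assumptions that should be
checked against the converter's documentation:

    # Sketch: convert a double-precision SCVT mesh to single precision.
    # Argument order and filenames are illustrative, not authoritative.
    ./double_to_float_grid x1.163842.grid.nc x1.163842.float.grid.nc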

60-km mesh (163842 horizontal grid cells)

30-km mesh (655362 horizontal grid cells)

15-km mesh (2621442 horizontal grid cells)

10-km mesh (5898242 horizontal grid cells)

Important note: At higher resolutions, the 3-d fields in the model can easily exceed the
4 GB limit imposed by the classic netCDF format. When creating atmospheric initial conditions (i.e., the "init.nc" file),
and when writing output streams from the model with 3-d fields, it is necessary to use an "io_type" that supports
large variables, such as "pnetcdf,cdf5" or "netcdf4". For more information on selecting the "io_type" of a stream,
refer to Chapter 5 in the Users' Guide.
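
For example, in the streams.init_atmosphere file, the stream through which the
initial conditions are written could be given a large-variable io_type as follows
(a minimal sketch; the filename_template here is illustrative):

    <immutable_stream name="output"
                      type="output"
                      filename_template="init.nc"
                      io_type="pnetcdf,cdf5"
                      output_interval="initial_only" />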

Additionally, depending on the amount of memory available, it may be necessary to interpolate the static,
geographical fields in parallel. In the v5.0 release, doing this properly requires the use of a special, convex mesh
decomposition file -- a regular METIS partition file will not give correct results! In order to process static fields
in parallel, one must:

(1) comment out the code between lines 199 and 204 in src/core_init_atmosphere/mpas_init_atm_cases.F that ordinarily prevents the parallel processing of static fields; and

(2) ensure that the "cvt" partition file prefix (e.g., x1.5898242.cvt.part.) is specified in the config_block_decomp_file_prefix
variable in namelist.init_atmosphere, as shown in the namelist fragment after this list.
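
For the 10-km mesh, for example, the &decomposition record of
namelist.init_atmosphere would contain:

    &decomposition
        config_block_decomp_file_prefix = 'x1.5898242.cvt.part.'
    /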

Variable-resolution meshes

All variable-resolution meshes are supplied with a single refinement
region centered at 0.0° latitude, 0.0° longitude. This refinement
region may be relocated elsewhere on the sphere using the grid_rotate
utility, whose usage is described in the MPAS-Atmosphere Users' Guide.
The grid_rotate utility may be downloaded here.
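
As a sketch of the rotation workflow: the new location of the refinement region
is specified in the utility's namelist.input file, after which the utility is run
on the mesh file. The variable names below follow the namelist.input file
distributed with grid_rotate (check them against your copy), and the filenames
and target coordinates are illustrative:

    &input
       config_original_latitude_degrees = 0
       config_original_longitude_degrees = 0
       config_new_latitude_degrees = 40.0
       config_new_longitude_degrees = -105.0
       config_birdseye_rotation_counter_clockwise_degrees = 0
    /

    ./grid_rotate input_grid.nc rotated_grid.nc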

60-km – 15-km mesh

15-km – 3-km mesh

This mesh contains 6488066 horizontal grid cells, with the circular refinement
region spanning approximately 60 degrees of latitude/longitude. When using this mesh, there are several
crucial changes that must be made to pre-processing steps and to the model configuration:

(1) As with the quasi-uniform meshes above, the 3-d fields in the model can easily exceed the
4 GB limit imposed by the classic netCDF format. When creating atmospheric initial conditions (i.e., the "init.nc" file),
and when writing output streams from the model with 3-d fields, it is necessary to use an "io_type" that supports
large variables, such as "pnetcdf,cdf5" or "netcdf4"; see the example stream definition in the Important note above,
and Chapter 5 in the Users' Guide.

(2) Depending on the amount of memory available, it may be necessary to interpolate the static,
geographical fields in parallel. As described in the Important note above, doing this properly in the v5.0
release requires a special, convex mesh decomposition file -- a regular METIS partition file will not give
correct results! Follow the two steps given in that note, specifying this mesh's "cvt" partition file prefix
(e.g., x5.6488066.cvt.part.) in the config_block_decomp_file_prefix variable in namelist.init_atmosphere.

* Note also that when 'config_native_gwd_static = true', each MPI task will read roughly 4 GB of global terrain data, so care must
be taken to use enough compute nodes to avoid running out of memory.
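For example, with 16 MPI tasks per compute node, the global terrain data alone would occupy roughly 64 GB of
memory on each node.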

(3) When running the model itself with the 15-km – 3-km mesh, it is necessary to set config_apvm_upwinding = 0.0 in the nhyd_model namelist record.
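
For reference, a minimal sketch of the relevant portion of namelist.atmosphere,
with all other variables in the record omitted:

    &nhyd_model
        config_apvm_upwinding = 0.0
    /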