The ROMS algorithms were updated to include Phase I of the multiple-grid nesting infrastructure (svn trunk revision 505). This is a major update of the ROMS nonlinear (NLM), tangent linear (TLM), representer (RPM), and adjoint (ADM) numerical kernels. The changes are very subtle and mostly associated with DO-loop ranges for private arrays used in the horizontal differentiation operators. They are critical to the implementation of nesting: refinement, composite, and mosaic grids. This is one of the most invasive changes to ROMS in years. It required extensive testing of the NLM, TLM, RPM, and ADM kernels and associated algorithms, parallelism, and numerous CPP options.

The actual nesting capabilities are not available yet; they will be released in the near future. There is still some work ahead. The changes required for full nesting capabilities are minimal but tricky, and will involve changes to main2d/main3d, the open boundary conditions, and additional new files for the actual nesting. We are releasing this update now to give Users time to assimilate, test, and play with the new structure. This new version has very nice capabilities that some of you have been requesting for a long time.

This version is named ROMS 3.5. Once we have the full nesting in place and finish other important developments, it will become ROMS 4.0. The current plan is to release ROMS 4.0 sometime this year.

Many, many thanks to John Warner for his help in developing and testing this capability. We have been working on this, on and off, for several years.

What Is New:

Added logic to split the time records of an input field among several NetCDF files. This is useful when splitting input data (climatology, boundary, forcing) time records into several files (say, monthly, annual, etc.). In this case, each multiple-file entry line needs to end with the vertical bar (|) symbol. For example:
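The original file list from the post is not reproduced here; the following is a hypothetical sketch of what such a multiple-file block might look like in the standard input script (the keyword and file names are illustrative only). Six fields are split into two files each and joined with the vertical bar, and two fields use a single file, giving 14 files but 8 field entries:

```
     NFFILES == 8                          ! number of unique forcing fields

     FRCNAME == my_lwrad_year1.nc |
                my_lwrad_year2.nc \
                my_swrad_year1.nc |
                my_swrad_year2.nc \
                my_winds_year1.nc |
                my_winds_year2.nc \
                my_Pair_year1.nc |
                my_Pair_year2.nc \
                my_Qair_year1.nc |
                my_Qair_year2.nc \
                my_Tair_year1.nc |
                my_Tair_year2.nc \
                my_rain.nc \
                my_tides.nc
```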

Notice that NFFILES is 8 and not 14: there are 8 uniquely different fields in the file list, and we do not count file entries followed by the vertical bar symbol. This is because multiple-file entries are processed in ROMS with derived-type structures.

The multiple files for a field need to be entered in order of increasing time. Overlap of the time coordinate is allowed. A new routine, close_inp, was added to close_io.F to manage the backward time logic in adjoint-based algorithms.

It does not make much sense to split time-cycling data (with the cycle_length attribute) into several NetCDF files per field, so this logic is not currently allowed; it would also complicate the leap-year logic. When the cycle_length attribute is used, we recommend splitting the various input fields into several NetCDF files instead of splitting the time records of a particular field. Check the following post for details.

We no longer have the environmental variable NestedGrids in the makefile or the build scripts (build.bash, build.sh). The number of nested grids is now specified in ocean.in:

Code:

! Number of nested grids.

Ngrids = 1

Warning: you need to update your standard input script ocean_*.in.

This required a substantial change to all of the parameters that depend on the Ngrids dimension: they now need to be allocated at run time, after Ngrids is read from the input script.

Notice that in order to use the build scripts from version 3.5 and up with older versions of ROMS, you need to set the following environmental variable in your login .cshrc or .tcshrc:

Code:

setenv NestedGrids 1

Recall that in the build script you can specify any version of ROMS.

The input data inquiry section in routines get_ngfld.F, get_2dfld.F, and get_3dfld.F, and their associated backward time logic routines get_ngfldr.F, get_2dfldr.F, and get_3dfldr.F, was completely redesigned to allow multiple files per field. A new routine, inquire.F, was introduced to manage the inquiry and initialization of several internal variables.

A new derived-type structure, T_IO, is declared in inp_par.F to store the complete information about input and output files:
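The declaration itself is not reproduced in this post. As a sketch only, based on the multiple-file logic described above, such a structure might look like the following (the field names here are illustrative assumptions, not the actual inp_par.F declaration):

```fortran
!  Hypothetical sketch of the T_IO derived type; field names are
!  illustrative and may differ from the actual inp_par.F declaration.
      TYPE T_IO
        integer :: Nfiles                          ! number of multi-files
        integer :: Fcount                          ! current file counter
        integer :: ncid                            ! NetCDF file ID
        character (len=256) :: name                ! current file name
        character (len=256), pointer :: files(:)   ! multi-file names
      END TYPE T_IO
```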

Moved the nesting ng-loop from each driver into the main routines main2d and main3d to facilitate nesting layers, that is, running refinement and composite grids at the same time. The call to the ROMS time-stepping algorithms is just:
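The call itself is not shown in the post; with the RunInterval argument described below, it presumably reduces to something like this sketch (the exact interface may differ):

```fortran
!  Sketch of the driver-level call to the time-stepping kernel;
!  RunInterval is the time interval, in seconds, to execute ROMS.
      CALL main3d (RunInterval)
```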

Notice that the argument RunInterval is the time interval, in seconds, over which to execute ROMS. This will come in handy in multiple-model coupling configurations.

The routine get_bounds.F was substantially modified to compute the additional lower and upper bounds indices in the BOUNDS structure. The complexity of these parameters is set only here. This allows a lot of flexibility in nesting configurations including grid refinement, composite grids, and mosaic grids. This is actually one of the most important files (the brain) for ROMS nesting.

To facilitate nesting, the call to set_depth_tile was moved from step2d_LF_AM3.h to main3d.F. Similar changes were made to the associated TLM, RPM, and ADM routines.

The very old Optimal Interpolation (OI) data assimilation scheme (option OI_UPDATE) was eliminated. The files assimilation.in, mod_obs.F, and oi_update.F are now obsolete and have been removed. Several associated CPP options (ASSIMILATION_SSH, ASSIMILATION_SST, ASSIMILATION_T, ASSIMILATION_UVsur, ASSIMILATION_UV, UV_BAROCLINIC) were also eliminated. ROMS nowadays has modern and powerful adjoint-based data assimilation algorithms in terms of 4D-Var and (4D-Var)T. It would be very easy to build a 3D-Var algorithm from the 4D-Var algorithm; maybe one of these days we will do that, just for completeness. The resulting 3D-Var algorithm would still be more modern than the obsolete OI scheme.

Sorry for the inconvenience, but Andy Moore and I made an executive decision to eliminate these old data assimilation strategies to encourage Users to learn and try newer, more advanced methodologies. We still have several exciting new algorithms to release in the future.

Similarly, nudging as an assimilation scheme (options NUDGING_SST, NUDGING_T, NUDGING_UVsur, and NUDGING_UV) was also removed. These options were obsolete and inconsistent with ROMS advanced data assimilation algorithms. However, nudging as a Newtonian relaxation right-hand-side term (options M2CLM_NUDGING, M3CLM_NUDGING, and TCLM_NUDGING) is still available and useful.

Reworked and tested all the 4D-Var algorithms with the new nested grid design. All the examples for WC13 were tested to make sure that we get the same results. The spatial convolutions in the dual formulation algorithms were restructured and moved to a subroutine that imposes the error covariance.

These routines are now located in ROMS/Utility/convolve.F. This cleans the algorithms a lot and facilitates more complex algorithms in the future.

All the adjoint-based algorithms for the Generalized Stability Theory (GST) were converted to the new nesting infrastructure. This was very difficult; I spent around three months testing all the drivers. I have been thinking about how to do this for more than a year. This was one of the most challenging developments that I have done in ROMS, and it delayed the release of ROMS 3.5 by several months. I am very happy that I finally solved this design problem.

Technical Information:

The BOUNDS derived-type structure (declared in mod_param.F) was expanded to include several new upper and lower bounds, which have different values in periodic applications, in nested applications, and on tiles next to the boundaries. Special indices are required to process overlap regions (suffixes P and T) and lateral boundary values (suffixes B and M) in nested grid applications. The halo indices are used in private computations, which include ghost points, and are limited by MAX/MIN functions on the tiles next to the boundaries.

Notice that the starting (Imin, Jmin) and ending (Imax, Jmax) indices for I/O processing are 3D arrays. The first dimension (1:4) is the point type (1=PSI, 2=RHO, 3=u, 4=v), the second dimension (0:1) is the number of ghost points (0: no ghost points, 1: Nghost points), and the third dimension (0:NtileI(ng)*NtileJ(ng)-1) is the tile partition.
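Putting the dimensions above together, the I/O bounds might be declared inside the BOUNDS structure along these lines (a hypothetical sketch; the actual field names and attributes in mod_param.F may differ):

```fortran
!  Hypothetical sketch of the I/O processing bounds inside BOUNDS;
!  dimensions are (point type, ghost-point switch, tile partition).
      integer, pointer :: Imin(:,:,:)    ! (1:4, 0:1, 0:NtileI*NtileJ-1)
      integer, pointer :: Imax(:,:,:)
      integer, pointer :: Jmin(:,:,:)
      integer, pointer :: Jmax(:,:,:)
```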

The BOUNDS derived-type structure is allocated in initialize_param:

Code:

IF (.not.allocated(BOUNDS)) THEN
  allocate ( BOUNDS(Ngrids) )
  DO ng=1,Ngrids
    ...
  END DO
END IF

Most of the above indices are computed in routine var_bounds, which is located in Utility/get_bounds.F. This is a very important routine for ROMS nesting. The nesting in ROMS is basically a clever manipulation of horizontal indices: the actual computational statements are identical to previous versions of ROMS; we just change the I- and J-ranges in the DO-loops. Pretty neat but tricky, huh? Very complex configurations (nesting layers having both composite and refinement grids) are possible by just making changes to var_bounds. For example, the complex logic for the starting I-tile indices (western edge of the tile) in var_bounds is:

Code:

!
!-----------------------------------------------------------------------
!  Starting I-tile indices.
!-----------------------------------------------------------------------
!
      IF (DOMAIN(ng)%Western_Edge(tile)) THEN
        IF (EWperiodic(ng)) THEN
          Istr =my_Istr
          IstrP=my_Istr
          IstrR=my_Istr
          IstrT=IstrR
          IstrU=my_Istr
          IstrB=my_Istr
          IstrM=IstrU
#ifdef NESTING
        ELSE IF (CompositeGrid(iwest,ng)) THEN
          Istr =my_Istr
          IstrP=-NghostPoints+1
          IstrR=my_Istr
          IstrT=-NghostPoints
          IstrU=my_Istr
          IstrB=IstrT                 ! boundary conditions maybe avoided
          IstrM=IstrP                 ! boundary conditions maybe avoided
        ELSE IF (RefinedGrid(ng).and.(RefineScale(ng).gt.0)) THEN
          Istr =my_Istr
          IstrR=my_Istr-1
          IstrU=my_Istr+1
          IstrP=-NghostPoints+1
          IstrT=-NghostPoints
          IstrB=IstrT                 ! boundary conditions are avoided
          IstrM=IstrP                 ! boundary conditions are avoided
#endif
        ELSE
          Istr =my_Istr
          IstrP=my_Istr
          IstrR=my_Istr-1
          IstrT=IstrR
          IstrU=my_Istr+1
          IstrB=IstrT+1
          IstrM=IstrP+1
        END IF
      ELSE
        Istr =my_Istr
        IstrP=my_Istr
        IstrR=my_Istr
        IstrT=IstrR
        IstrU=my_Istr
        IstrB=my_Istr
        IstrM=IstrU
      END IF
!
!  Special case, Istrm3: used when MAX(0,Istr-3) is needed.
!
      IF (DOMAIN(ng)%Western_Edge(tile)) THEN
        IF (EWperiodic(ng)) THEN
          Istrm3=my_Istr-3
#ifdef NESTING
        ELSE IF (CompositeGrid(iwest,ng)) THEN
          Istrm3=my_Istr-3
        ELSE IF (RefinedGrid(ng).and.(RefineScale(ng).gt.0)) THEN
          Istrm3=my_Istr-3
#endif
        ELSE
          Istrm3=MAX(0,my_Istr-3)
        END IF
      ELSE
        Istrm3=my_Istr-3
      END IF
!
!  Special case, Istrm2: used when MAX(0,Istr-2) is needed.
!
      IF (DOMAIN(ng)%Western_Edge(tile)) THEN
        IF (EWperiodic(ng)) THEN
          Istrm2=my_Istr-2
#ifdef NESTING
        ELSE IF (CompositeGrid(iwest,ng)) THEN
          Istrm2=my_Istr-2
        ELSE IF (RefinedGrid(ng).and.(RefineScale(ng).gt.0)) THEN
          Istrm2=my_Istr-2
#endif
        ELSE
          Istrm2=MAX(0,my_Istr-2)
        END IF
      ELSE
        Istrm2=my_Istr-2
      END IF
!
!  Special case, IstrUm2: used when MAX(1,IstrU-2) is needed.
!
      IF (DOMAIN(ng)%Western_Edge(tile)) THEN
        IF (EWperiodic(ng)) THEN
          IstrUm2=IstrU-2
#ifdef NESTING
        ELSE IF (CompositeGrid(iwest,ng)) THEN
          IstrUm2=IstrU-2
        ELSE IF (RefinedGrid(ng).and.(RefineScale(ng).gt.0)) THEN
          IstrUm2=IstrU-2
#endif
        ELSE
          IstrUm2=MAX(1,IstrU-2)
        END IF
      ELSE
        IstrUm2=IstrU-2
      END IF
!
!  Special case, Istrm1: used when MAX(1,Istr-1) is needed.
!
      IF (DOMAIN(ng)%Western_Edge(tile)) THEN
        IF (EWperiodic(ng)) THEN
          Istrm1=my_Istr-1
#ifdef NESTING
        ELSE IF (CompositeGrid(iwest,ng)) THEN
          Istrm1=my_Istr-1
        ELSE IF (RefinedGrid(ng).and.(RefineScale(ng).gt.0)) THEN
          Istrm1=my_Istr-1
#endif
        ELSE
          Istrm1=MAX(1,my_Istr-1)
        END IF
      ELSE
        Istrm1=my_Istr-1
      END IF
!
!  Special case, IstrUm1: used when MAX(2,IstrU-1) is needed.
!
      IF (DOMAIN(ng)%Western_Edge(tile)) THEN
        IF (EWperiodic(ng)) THEN
          IstrUm1=IstrU-1
#ifdef NESTING
        ELSE IF (CompositeGrid(iwest,ng)) THEN
          IstrUm1=IstrU-1
        ELSE IF (RefinedGrid(ng).and.(RefineScale(ng).gt.0)) THEN
          IstrUm1=IstrU-1
#endif
        ELSE
          IstrUm1=MAX(2,IstrU-1)
        END IF
      ELSE
        IstrUm1=IstrU-1
      END IF

That is, we just need to make changes in this routine to set up complex configurations, without the need to change the ROMS kernel.

Notice that the extended bounds (labeled with the suffix R) are designed to cover the outer grid points when the subdomain tile is adjacent to the physical boundary. However, the IstrR, IendR, JstrR, and JendR tile bounds computed here do not cover ghost points associated with periodic boundaries (if any) or the computational margins of MPI subdomains.

A good place to check how the new indices are used is to compare step2d_LF_AM3.h side by side with an older version of this routine. Click on Last Change in the previous link from trac.

The C-preprocessing logical expressions to operate on tiles next to the domain boundaries and corners are replaced with actual logical variables to facilitate nesting configurations. These new logical variables are in the derived-type DOMAIN(ng) structure declared in mod_param.F and initialized in routine get_domain_edges located in get_bounds.F. The change is as follows:
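The actual diff is not reproduced here; as a before/after sketch of the kind of change described (the old macro name is an assumption based on historical ROMS headers and may differ):

```fortran
!  Before: C-preprocessing logical expression, expanded by cpp, e.g.
!
!    IF (WESTERN_EDGE) THEN
!      ...
!    END IF
!
!  After: actual logical variable in the DOMAIN derived-type structure.
      IF (DOMAIN(ng)%Western_Edge(tile)) THEN
        ...
      END IF
```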

To avoid recomputations in overlap contact areas during nesting, the RHO-point range in several routines was changed from:

Code:

DO j=JstrR,JendR
  DO i=IstrR,IendR
    ...
  END DO
END DO

to

Code:

DO j=JstrT,JendT
  DO i=IstrT,IendT
    ...
  END DO
END DO

To Be Resolved:

There is a shared-memory (and serial with partitions) parallel bug when activating TS_MPDATA and NS_PERIODIC. The bug doesn't appear right away but only after several time steps, depending on the application (GRAV_ADJ or RIVERPLUME1). I have been hunting this bug for a while. It is likely that the bug is in mpdata_adiff.F, and it goes away if Wa=0.0. It seems that we have some roundoff issues in this routine.

There is a parallel bug in DIAGNOSTICS_TS with TS_MPDATA for the variable salt_xadv. It is present in both the original and new versions of step3d_t.F and mpdata_adiff.F. It was noticed in the UPWELLING test case, which keeps salinity constant. This is weird because salinity is constant and has no evolution from the physics (SCOEF = 0.0). There is a roundoff problem here. I don't know if this is related to the bug in TS_MPDATA for N-S periodicity.

Need to resolve the issue of nudging to climatology in step3d_t.F when using refined grids. This needs to be done only in the parent grid (ng=1).

There is a shared-memory (and serial with partitions) bug in the WC13 application. It is related to land/sea masking when the partition is close to land.
