Standalone code

The build process in standalone uses the SFIRE source code from the WRF file tree, namely the files WRFV3/phys/module_fr_sfire_*.F except WRFV3/phys/module_fr_sfire_driver_wrf.F. There is no code duplication. Instead of being linked with WRF, the fire code is linked with the *.F files in the standalone directory, which provide I/O and substitute for the required subset of WRF functionality.

Interface between WRF and SFIRE

The defined interface between WRF and SFIRE is the call from WRFV3/phys/module_fr_sfire_driver_wrf.F to subroutine sfire_driver_em in WRFV3/phys/module_fr_sfire_driver.F.

WRF calls sfire_driver_em once at initialization, and then (with slightly different arguments) in every time step.

The arguments of sfire_driver_em consist of two structures (called derived types in Fortran): grid, which contains all state, input, and output variables, and config_flags, which holds all variables read from the file namelist.input; plus some array dimensions.

The standalone code defines its own grid derived type containing only the subset of fields it needs. All fields in grid are set in the standalone driver main file, standalone/model_test_main.F; nothing is hidden elsewhere.
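As an illustration, a minimal standalone grid type might look like the following sketch. This is illustrative only: the component names shown are representative fire-mesh fields, and the actual set of fields is determined by standalone/model_test_main.F and the WRF registry, not by this sketch.

```fortran
! A minimal sketch of a standalone grid derived type. Illustrative
! only: the real type has many more fields, generated from the WRF
! registry; the fields shown here are a representative sample.
module module_sketch_grid
  implicit none
  type :: domain
     ! fire-mesh arrays, allocated and filled by the standalone driver
     real, pointer, dimension(:,:) :: lfn        ! level-set function
     real, pointer, dimension(:,:) :: tign_g     ! ignition time
     real, pointer, dimension(:,:) :: fuel_frac  ! remaining fuel fraction
  end type domain
end module module_sketch_grid
```

Because everything is set explicitly in the standalone main program, adding a new field means adding it to this type and initializing it there, with no hidden machinery.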

The standalone code replicates config_flags from WRF using include files named *.inc in the standalone directory. These include files are copies of the files generated by the WRF build process in WRFV3/inc; they are copies rather than soft links so that the standalone code can be built without building WRF. The inc files may need to be updated when the description of the configuration flags in the WRF registry changes.

SFIRE architecture

WRF divides the horizontal domain into patches and divides the patches into tiles. Each patch executes in one MPI process. Each tile is updated by one OpenMP thread.

The SFIRE code is capable of parallel execution in shared memory. The division into tiles is controlled by fields in grid. There is only one OpenMP parallel loop over the tiles, in WRFV3/phys/module_fr_sfire_driver.F; the rest of the SFIRE code executes on a single tile, starting from WRFV3/phys/module_fr_sfire_model.F. Because the tiles need to access values from neighboring tiles at several points in the computation, the parallel loop is executed several times within a single invocation of SFIRE, which synchronizes the data in memory at the exit from the loop. Each execution of the parallel loop performs a different stage of the computation.
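The staged loop structure can be sketched as follows. The subroutine and variable names here are hypothetical, not the actual SFIRE identifiers; the point is that the implicit barrier at the end of each parallel loop is what lets the next stage read neighboring tiles safely.

```fortran
! Illustrative sketch of the staged tile loop; subroutine and array
! names are hypothetical, not the actual SFIRE identifiers.
subroutine sfire_stages(num_stages, num_tiles, i_start, i_end, j_start, j_end)
  implicit none
  integer, intent(in) :: num_stages, num_tiles
  integer, intent(in) :: i_start(num_tiles), i_end(num_tiles)
  integer, intent(in) :: j_start(num_tiles), j_end(num_tiles)
  integer :: stage, ij
  do stage = 1, num_stages
     !$OMP PARALLEL DO PRIVATE(ij) SCHEDULE(DYNAMIC)
     do ij = 1, num_tiles
        ! each thread updates one tile of the fire mesh
        call fire_model_stage(stage, i_start(ij), i_end(ij), &
                              j_start(ij), j_end(ij))
     end do
     !$OMP END PARALLEL DO
     ! the implicit barrier at the end of the parallel loop synchronizes
     ! memory, so the next stage can safely read values from other tiles
  end do
end subroutine sfire_stages
```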

When SFIRE runs in distributed memory, the communication between the patches is done by include files in WRFV3/phys/module_fr_sfire_driver.F (search for HALO). This has no effect in the standalone code; in WRF, the includes are provided by the WRF parallel infrastructure, RSL-Lite. To run SFIRE under MPI outside of WRF, you need to provide equivalent HALO includes yourself.

SFIRE does not keep any state except scalar flags and fixed-size tables, set at initialization. All adjustable-size arrays preserved between calls are in grid.

Required testing

If you change anything in the files WRFV3/phys/module_fr_sfire_*.F, you must test that the changes do not break WRF-Fire; otherwise your changes will not be maintainable.

Build WRF-Fire with SM+DM parallelism with debugging, and test all 4 versions (serial, SM, DM, SM+DM) with the examples provided in WRFV3/test/em_fire and various numbers of processors. The results (the numerical values in the arrays in the wrfrst files, not the files themselves) must be bit-identical to each other and identical to what they were before your changes. You can compare the files using the diffwrf utility, which is built as part of the WRF compilation process, or start Matlab in WRFV3/test/em_fire (to set the path) and use the ncdiff command in Matlab.

Build WRF-Fire with optimization and test on several platforms (at least gfortran and PGI).

Web-based run system WRFX

wrfxpy - Initiates and manages WRF-SFIRE simulations on a cluster. Automates data download, job queuing and monitoring, and postprocessing of outputs into geolocated images. (Python). Intended to run on a head node of the cluster. Documentation is available at http://wrfxpy.readthedocs.io

wrfxctrl - Map-based web interface to wrfxpy (Python and JavaScript). Intended to run on a head node of the cluster. Runs its own web server.

wrfxweb - Map-based visualization server running on a remote server. The visualizations are also available as Google Earth KML files. (Browser-based JavaScript, Python utilities). Designed to run on a small cloud machine.