HDF5 Backend

The HDF5 backend is the first storage backend implemented in Damaris. It allows simulation developers to write simulation results in the HDF5 data format asynchronously. The HDF5 backend can be used in two different modes in Damaris:

File-Per-Dedicated-Core: In this mode, the simulation results on each node are aggregated by the dedicated cores and stored asynchronously at the end of each iteration.

Collective I/O: In this mode, all simulation results are written into a single file for each iteration using Parallel HDF5. Although having a single file makes data post-processing more convenient, synchronizing all the processes to write to a single file is costly.

Below, you can find more details on how to configure Damaris to use the HDF5 backend.

Configuration

To start using this backend, you should first compile Damaris with HDF5 support. To do so, initialize the HDF5_ROOT variable in the CMakeLists.txt file before compiling, like this:
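For example (the install path below is hypothetical; point HDF5_ROOT at the location of your own HDF5 installation):

```cmake
# Hypothetical path: replace with the location of your HDF5 installation.
set(HDF5_ROOT /usr/local/hdf5-1.10.1)
```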

Once Damaris has been compiled with HDF5 support, you should define your storage in the XML configuration file. To define your storage, use the <storage> tag at the root of your XML file. An example <storage> tag is shown here:
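A minimal sketch of such a configuration (the store name MyStore is arbitrary, and the exact child-element syntax may differ between Damaris versions):

```xml
<storage>
    <store name="MyStore" type="HDF5">
        <!-- backend-specific options go here -->
    </store>
</storage>
```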

Inside the <storage> tag, you should define another tag, called <store>. This tag and its attributes define a storage backend, e.g. the HDF5 backend. For each <store> tag, you should define two important attributes:

Name: an arbitrary name that you choose for your store.

Type: for the HDF5 backend, this should be HDF5. In the future, as new storage backends are added, their proper type names should be used here.

Each storage backend may have its own parameters for configuring the underlying storage technology. For HDF5, the following options are currently available:

FileMode: can be set to either FilePerCore or Collective. As described earlier, in the former case each dedicated core writes to its own file, while in the latter all the dedicated cores write into a single HDF5 file at the end of each iteration.

XDMFMode: not integrated into this version yet. Setting this option to NoIteration, FirstIteration or LastIteration will configure Damaris to build the XDMF file at different times during the run of the simulation.

FilesPath: the path in which the resulting HDF5 files are stored.
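Putting these options together, a complete <store> definition might look as follows (the <option> element syntax is an assumption and may differ between Damaris versions; the output path is hypothetical):

```xml
<storage>
    <store name="MyStore" type="HDF5">
        <option key="FileMode">Collective</option>
        <option key="XDMFMode">NoIteration</option>
        <option key="FilesPath">/tmp/damaris/output/</option>
    </store>
</storage>
```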

After defining a <store> tag in the XML file, you need to specify that a variable of your simulation should be stored using this store. For example, the variable space in the definition below is configured to be stored in the MyStore storage.
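A sketch of such a variable definition (the type and layout attributes are illustrative; the store attribute is what links the variable to the store):

```xml
<variable name="space" type="scalar" layout="my_layout" store="MyStore" />
```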

If the collective mode is selected, then for each variable the global dimensions of the data should be specified in addition to its local dimensions. As an example, the layout below:
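(a sketch; the layout name and type are illustrative, and the attribute names follow the local-dimensions-plus-global pattern described here)

```xml
<layout name="my_layout" type="double" dimensions="16+5,16/size+3" global="16,16" />
```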

shows that although a 16×16 grid is processed by the simulation, each process works with a smaller dataset. The height of this local dataset is equal to 16+5 (5, i.e. 2+3, is the total length of the ghost zones in both directions) and its width is equal to 16/size+3 (3, i.e. 1+2, is the total length of the ghost zones). As mentioned earlier, size is the number of clients. To see more examples of HDF5 storage, you can check this page.

File Names

There are some points about storing simulation variables in HDF5 files:

At the end of each iteration, the values of the variables are written to separate HDF5 files. If the HDF5 backend is configured to store collectively, one file is created per iteration. In the file-per-dedicated-core mode, one file is created for each dedicated core at the end of each iteration.

If a variable is empty in some iterations, it is not written to the files. For example, the coordinate variables are usually written only at the first iteration; in this case, these variables appear only in the files of the first iteration.

If no data is written in an iteration, no file will be created for that iteration.

In addition, for naming the created files, the Damaris HDF5 backend uses the simulation name (defined in the name attribute of the simulation tag) as the base and appends suffixes to it. Here are some examples:

In the collective mode, only one file is created at the end of each iteration, so the name of the file will be simulation_ItXX.h5, where simulation is the name of the simulation defined in the XML file and XX is the iteration number kept by Damaris. For example, multi-physics_It550.h5 is the file created for a simulation named multi-physics at its 550th iteration.

In the file-per-dedicated-core mode, one file will be created at the end of each iteration for each dedicated core. The files are named following a pattern like simulation_ItXX_PrYY.h5. As in the collective case, simulation and XX represent the simulation name and the iteration number. In addition, YY represents the rank of the dedicated process that has written its variables to the file.

File Structure

Depending on the configuration of the XML file, the contents of the HDF5 files may vary. Here you can find three different structures of HDF5 files. For the examples below, we suppose that a 16×16 mesh is processed by 8 clients and 4 dedicated cores, for a total of 12 processes running on a node.

In the collective I/O case, the HDF5 file will contain a dataspace as large as the dimensions defined in the layout of the variable (in the global attribute, as described above) and with the same name as the simulation. So, the dataspace for the mentioned example will be something like this:
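An illustrative h5ls-style view (the simulation name multi-physics and iteration number 550 are taken from the naming examples above; the 16×16 extent comes from the global dimensions of the layout):

```
multi-physics_It550.h5
└── multi-physics        Dataset {16, 16}
```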

In the file-per-dedicated-core mode with one domain per process (domain = 1), the HDF5 file will contain a group named after the stored variable, and under the variable group a set of groups named PZ. Here, Z represents the rank of the client that sent this block of memory to the dedicated core. As an example, the dataspace for one of the created files could look like this:
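For instance, with 8 clients and 4 dedicated cores, each dedicated core aggregates the data of 2 clients, so a file written by dedicated core 0 might look like this (the client ranks shown are illustrative; the local extents 21×5 follow from 16+5 and 16/8+3 in the layout example above):

```
multi-physics_It550_Pr0.h5
└── space
    ├── P0    Dataset {21, 5}
    └── P1    Dataset {21, 5}
```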

In the file-per-dedicated-core mode with more than one domain per process (domain > 1), the structure of the HDF5 file will be the same as in the previous case. The only difference is that under each PZ group a new group, named BT, is created, in which B stands for block and T represents the block number. For example, if the clients write data twice in each iteration (domain = 2), the file structure will look like this:
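A hypothetical illustration of the same file with domain = 2 (client ranks and block numbers are illustrative):

```
multi-physics_It550_Pr0.h5
└── space
    ├── P0
    │   ├── B0
    │   └── B1
    └── P1
        ├── B0
        └── B1
```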

VDS Support

HDF5 version 1.10.1 comes with a new feature called Virtual Dataset (VDS). This feature enables HDF5 developers to access and work with data stored in a collection of HDF5 files (like those stored by different cores in the case of Damaris) as if the data were stored in a single .h5 file. This feature is not supported in Damaris yet, because the .vds files cannot be created collectively in the current version of HDF5 (1.10.1). As soon as this issue is fixed, Damaris will support VDS files in the file-per-dedicated-core case. This means that, using the VDS file, the whole set of files created by the dedicated cores can be accessed like the single HDF5 file created in the collective mode.