The CABARET method can be used on both structured and unstructured grids. For high-fidelity simulations that require unstructured grids, a version of the CABARET code capable of handling an arbitrary grid structure has been developed; this is the code used in this project. At the start of the project, the CABARET code ran mainly on single-core desktop systems.

We shall describe the development of a distributed-memory version of CABARET for use on the HECToR XT/XE systems, based on an unstructured grid representation for compressible turbulent flows. It is worth noting that relationships between neighbouring cells and partitions are considerably more complex to manage here than with a structured grid. The consequence is that the connectivity data associated with an unstructured grid layout must be explicitly stored and kept up to date, because each update of a cell's values depends on the values held by cells in its physical proximity, and these neighbours can no longer be found by simple index arithmetic.
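To illustrate the point above, the following is a minimal sketch (the function and variable names are hypothetical, not taken from the CABARET code) of face-based connectivity on an unstructured grid: each interior face records its two adjacent cells, and a cell's neighbours must be looked up through this table rather than computed from (i, j, k) indices as on a structured grid.

```python
from collections import defaultdict

def build_cell_neighbours(faces):
    """faces: list of (owner_cell, neighbour_cell) pairs, one per
    interior face. Returns a dict mapping each cell id to the list
    of its face-adjacent neighbour cells."""
    nbrs = defaultdict(list)
    for owner, neigh in faces:
        nbrs[owner].append(neigh)
        nbrs[neigh].append(owner)
    return dict(nbrs)

# Four hexahedral cells in a row, sharing faces 0-1, 1-2 and 2-3.
faces = [(0, 1), (1, 2), (2, 3)]
nbrs = build_cell_neighbours(faces)
# nbrs[1] == [0, 2]: cell 1 must read cells 0 and 2 to update itself.
```

Any numerical update loop then iterates over this table, which is exactly the bookkeeping that a structured-grid code gets for free.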

The original work plan for this project was scheduled as follows:

Develop an automatic geometrical domain decomposition for parallel processing, which should balance the load efficiently in relation to the CABARET hexahedral grid structure. This will be implemented as a pre-processing stage applied to the data from the Gambit mesh generator, facilitating the use of either METIS or Scotch. Produce an internal report on the best partitioning method and demonstrate its effectiveness for at least one million grid points.
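The core of such a pre-processing stage is converting the mesh into the dual graph (one vertex per cell, one edge per shared face) in the compressed sparse row arrays that METIS routines such as METIS_PartGraphKway take as input; Scotch can consume the same adjacency structure. A minimal sketch, with hypothetical names and a toy four-cell mesh:

```python
def dual_graph_csr(faces, ncells):
    """Build the CSR adjacency (xadj, adjncy) of the mesh dual graph:
    one graph vertex per cell, one edge per interior face. This is the
    input format expected by METIS_PartGraphKway.
    faces: interior faces as (cell_a, cell_b) pairs."""
    adj = [[] for _ in range(ncells)]
    for a, b in faces:
        adj[a].append(b)
        adj[b].append(a)
    xadj, adjncy = [0], []
    for neighbours in adj:
        adjncy.extend(neighbours)
        xadj.append(len(adjncy))
    return xadj, adjncy

# Four hexahedral cells in a row, sharing faces 0-1, 1-2 and 2-3.
xadj, adjncy = dual_graph_csr([(0, 1), (1, 2), (2, 3)], 4)
# xadj == [0, 1, 3, 5, 6], adjncy == [1, 0, 2, 1, 3, 2]
```

The partitioner then returns one partition id per cell, from which the halo (ghost-cell) lists for each inter-partition boundary can be derived.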

Implement a new MPI parallel version of CABARET using data-passing protocols across internal boundaries (cell faces and sides).
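The data to be passed across internal boundaries can be identified directly from the partitioning: any interior face whose two cells lie in different partitions implies an exchange before each update. The sketch below (hypothetical names, no actual MPI calls; in the real code the pairs would drive point-to-point messages such as MPI_Sendrecv) builds these exchange lists:

```python
def halo_lists(faces, part):
    """For each ordered pair of partitions (p_src, p_dst), list the
    (send_cell, recv_cell) pairs whose shared face crosses the
    partition boundary. These identify the cell data each partition
    must send to its neighbours before every update step.
    faces: interior faces as (cell_a, cell_b); part: cell -> partition."""
    halos = {}
    for a, b in faces:
        pa, pb = part[a], part[b]
        if pa != pb:
            halos.setdefault((pa, pb), []).append((a, b))
            halos.setdefault((pb, pa), []).append((b, a))
    return halos

# Four cells in a row, split two-and-two between partitions 0 and 1.
part = {0: 0, 1: 0, 2: 1, 3: 1}
halos = halo_lists([(0, 1), (1, 2), (2, 3)], part)
# halos[(0, 1)] == [(1, 2)]: partition 0 sends cell 1's data to partition 1.
```

Keeping these lists fixed after the pre-processing stage means the communication pattern is static, which suits persistent or pre-posted MPI communication.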

Validate and test the new code using a 3D backward-facing step case, with a target of 70% parallel efficiency on 256 cores of the XT4 part of HECToR with the Phase 2a architecture.
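Parallel efficiency here is the usual ratio E = T1 / (p * Tp), i.e. speedup divided by core count. A small sketch with purely illustrative timings (not measured CABARET results):

```python
def parallel_efficiency(t_serial, t_parallel, ncores):
    """Parallel efficiency E = T1 / (p * Tp):
    achieved speedup divided by the number of cores used."""
    return t_serial / (ncores * t_parallel)

# Illustrative numbers only: if one core took 25600 s and 256 cores
# took 140 s, the speedup is ~183 and the efficiency ~71%,
# which would meet the 70% target.
eff = parallel_efficiency(25600.0, 140.0, 256)
```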

Demonstrate that the new code works for at least 50 million grid points on the XT4 part of HECToR, using the 3D backward-facing step case.

This work began in March 2009, with the end goal being a scalable parallel code that would facilitate much larger simulations than are possible with the serial code. By implementing MPI, the data would be distributed over many processing nodes on HECToR, with optimised communication enabling the application to run simulations at least 100 times larger than previously and with a much faster turnaround time.

This report will discuss the development of the CABARET code in relation to: