Abstract

The computing power available today to the average Monte Carlo code user is sufficient to perform large-scale neutron transport simulations, such as full-core burnup or high-fidelity multiphysics. In practice, however, software limitations in most of the available Monte Carlo codes result in low efficiency when running in High Performance Computing (HPC) environments, the main issues being inadequate memory utilization and poor scalability. The traditional parallel processing scheme, based on splitting particle histories among processes, requires replicating the full problem geometry and data on every node; the memory demand per computing node therefore does not scale, and a memory bottleneck appears for large-scale problems. The poor scalability of this approach usually limits the resources that can be used efficiently to a small number of nodes or processors. Consequently, massively parallel execution is not viable with particle-based parallelism alone. In this work we propose a Spatial Domain Decomposition (SDD) approach to develop an efficient and scalable Monte Carlo neutron transport algorithm. By breaking the geometry into subdomains, a distributed-memory scheme can be used to reduce the in-node memory demand, allowing the simulation of large-scale, memory-intensive problems. Additionally, an efficient neutron tracking algorithm can significantly improve the overall speedup.
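The core SDD idea in the abstract can be sketched in a few lines: each process owns one spatial subdomain (and only that subdomain's data), and particles that reach a subdomain face are banked and handed to the neighboring owner. The toy 1-D slab model below is purely illustrative and not the paper's implementation: the geometry, the uniform cross section, the pure-absorption physics, and the serial emulation of per-rank particle banks are all assumptions made here for a self-contained example (a real code would exchange the banks via MPI messages).

```python
import math
import random

# Toy 1-D illustration of Spatial Domain Decomposition (SDD) for Monte
# Carlo neutron transport. All names, the slab geometry, and the physics
# (uniform total cross section, pure absorption, mu = +/-1 only) are
# illustrative assumptions, not the algorithm proposed in the paper.

random.seed(0)

NUM_DOMAINS = 4       # number of spatial subdomains (one per "rank")
DOMAIN_WIDTH = 5.0    # cm; subdomain d spans [d*W, (d+1)*W)
SIGMA_T = 0.3         # total macroscopic cross section (1/cm)

def track_in_domain(x, mu, d):
    """Fly one particle inside subdomain d until it collides (absorbed
    here, for simplicity) or reaches a subdomain face."""
    lo, hi = d * DOMAIN_WIDTH, (d + 1) * DOMAIN_WIDTH
    # sampled free-flight path; 1 - random() avoids log(0)
    dist = -math.log(1.0 - random.random()) / SIGMA_T
    x_new = x + mu * dist
    if lo <= x_new < hi:
        return ("absorbed",)                  # collision inside the domain
    nxt = d + 1 if mu > 0 else d - 1          # neighbor owning that face
    if nxt < 0 or nxt >= NUM_DOMAINS:
        return ("leaked",)                    # escaped the whole geometry
    face = hi if mu > 0 else lo
    return ("exit", nxt, face, mu)            # bank for the neighbor

def run(num_particles):
    # Per-domain particle banks emulate the distributed-memory layout:
    # in an MPI implementation each bank would live on a different node,
    # and only that node would hold the subdomain's geometry/data.
    banks = [[] for _ in range(NUM_DOMAINS)]
    banks[0] = [(0.0, 1.0)] * num_particles   # source at left face, moving right
    absorbed = leaked = 0
    while any(banks):
        for d in range(NUM_DOMAINS):
            pending, banks[d] = banks[d], []
            for x, mu in pending:
                out = track_in_domain(x, mu, d)
                if out[0] == "absorbed":
                    absorbed += 1
                elif out[0] == "leaked":
                    leaked += 1
                else:
                    _, nxt, x2, mu2 = out
                    banks[nxt].append((x2, mu2))  # hand off to neighbor
    return absorbed, leaked

if __name__ == "__main__":
    a, l = run(1000)
    print("absorbed:", a, "leaked:", l)
```

Note how the memory benefit arises: each rank touches only its own subdomain's data, so the per-node footprint shrinks as domains are added, at the cost of the boundary-crossing communication that the tracking algorithm must handle efficiently.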
