Parallelization of computational tools for the no core shell model
Master's thesis
The many-body Schrödinger equation is of fundamental importance in nuclear physics. It can be solved approximately by employing the no core shell model (NCSM). Using this numerical method in nuclear simulations is a computationally demanding task that requires powerful computer hardware and purpose-built software. In this project we use the NCSM code JupiterNCSM to study an eight-nucleon system, 8Be. This research code has previously only been used for nuclei with A ≤ 6 nucleons, and our extension to an eight-nucleon system represents a significant increase in computational requirements. Code performance is therefore a key factor. We find that a single compute node running JupiterNCSM has insufficient computational power to simulate 8Be in large model spaces. To overcome this limitation we introduce distributed-memory parallelization in JupiterNCSM using the Message Passing Interface (MPI), which makes it possible to exploit the computational capabilities of several compute nodes simultaneously. In addition, we optimize other areas of the code, such as data access patterns, to further increase performance. As a result, we have extended the predictive reach of JupiterNCSM, enabling the study of atomic nuclei in larger model spaces. With the memory-architecture and code optimizations in place, we use JupiterNCSM to sample the posterior predictive distribution (PPD) of the decay energy threshold of 8Be. We find that our NCSM computations are not fully converged, which leads to low-precision predictions manifested as a wide PPD. Still, the predictions are accurate, since they reproduce the experimentally measured decay energy. Finally, we identify the most relevant areas for further improvement, suggesting future efforts to reach even larger model spaces and achieve better convergence.
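As the keywords indicate, the NCSM workload at the heart of this work is large-scale matrix diagonalization via the Lanczos algorithm: the Hamiltonian is represented as a large sparse matrix whose lowest eigenvalues give the nuclear energy levels. The following is a minimal single-node sketch in Python/NumPy of the Lanczos idea (with full reorthogonalization for numerical robustness); it is an illustration only, not JupiterNCSM's actual implementation, and the function name `lanczos_lowest` is hypothetical.

```python
import numpy as np

def lanczos_lowest(H, k=30, seed=0):
    """Approximate the lowest eigenvalue of a symmetric matrix H
    using k Lanczos steps (illustrative sketch, not JupiterNCSM code)."""
    n = H.shape[0]
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)        # normalized random start vector
    V = [v]                       # Lanczos basis vectors
    alphas, betas = [], []        # tridiagonal matrix elements
    for _ in range(min(k, n)):
        w = H @ V[-1]             # the dominant cost: one matrix-vector product
        alphas.append(V[-1] @ w)  # diagonal element alpha_j
        # full reorthogonalization against all previous basis vectors
        for u in V:
            w -= (u @ w) * u
        beta = np.linalg.norm(w)
        if beta < 1e-10:          # Krylov space exhausted
            break
        betas.append(beta)        # off-diagonal element beta_j
        V.append(w / beta)
    # diagonalize the small tridiagonal projection T of H
    m = len(alphas)
    T = (np.diag(alphas)
         + np.diag(betas[:m - 1], 1)
         + np.diag(betas[:m - 1], -1))
    return np.linalg.eigvalsh(T)[0]
```

The key property exploited by NCSM codes is visible in the loop: only matrix-vector products `H @ v` touch the huge Hamiltonian, so distributing those products (e.g. with MPI, as done for JupiterNCSM in this work) parallelizes the whole eigensolve.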
nuclear physics, no core shell model, Lanczos algorithm, large-scale matrix diagonalization, parallelization, high performance computing, MPI