Parallelization of computational tools for the no core shell model
dc.contributor.author | Hansson, Johannes | |
dc.contributor.department | Chalmers tekniska högskola / Institutionen för fysik | sv |
dc.contributor.examiner | Forssén, Christian | |
dc.contributor.supervisor | Forssén, Christian | |
dc.date.accessioned | 2022-06-15T13:41:07Z | |
dc.date.available | 2022-06-15T13:41:07Z | |
dc.date.issued | 2022 | sv |
dc.date.submitted | 2020 | |
dc.description.abstract | The many-body Schrödinger equation is of fundamental importance in nuclear physics. It can be (approximately) solved by employing the no core shell model (NCSM). Using this numerical method in nuclear simulations is a computationally heavy task that requires powerful computer hardware and purpose-built software. In this project we use the NCSM code JupiterNCSM to study an eight-nucleon system, 8Be. This research code has previously only been used for nuclei with A ≤ 6 nucleons, and our extension to an eight-nucleon system represents a significant increase in computational requirements. Therefore, code performance is a key factor. We find that a single computer node running JupiterNCSM has insufficient computational power to simulate 8Be in large model spaces. To overcome this limitation we introduce distributed-memory parallelization in JupiterNCSM using the Message Passing Interface (MPI), which makes it possible to exploit the computational capabilities of several computer nodes simultaneously. In addition, we optimize other areas of the code, such as data access patterns, to further increase performance. As a result we have extended the predictive reach of the JupiterNCSM software, enabling the study of atomic nuclei in larger model spaces. With the memory-architecture and code optimizations in place, we then use JupiterNCSM to sample the posterior predictive distribution (PPD) of the decay energy threshold of 8Be. We find that our NCSM computations are not fully converged, which leads to low-precision predictions manifested by a wide PPD. Still, the predictions are accurate in the sense that they reproduce the experimentally measured decay energy. Finally, we identify the most relevant areas for further improvement, suggesting future efforts to reach even larger model spaces and achieve better convergence. | sv
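The distributed-memory strategy summarized in the abstract can be illustrated with a minimal sketch, written here in C with standard MPI calls. It is not JupiterNCSM source code: the toy dense Hamiltonian, the DIM constant, and the block-row decomposition are illustrative assumptions. The sketch shows the communication pattern behind a parallel Lanczos iteration: each rank computes its slice of the matrix-vector product y = H v, and MPI_Allgatherv assembles the full result on every rank.

/* Minimal sketch -- NOT JupiterNCSM source code. Rows of a toy, dense
 * Hamiltonian are split over MPI ranks; each rank computes its slice of
 * y = H v and MPI_Allgatherv assembles the full vector on every rank,
 * as required before the next Lanczos step. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define DIM 8   /* toy basis dimension; real NCSM bases are vastly larger */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Block-row decomposition: each rank owns my_rows rows starting at row0. */
    int base = DIM / nranks, rem = DIM % nranks;
    int my_rows = base + (rank < rem ? 1 : 0);
    int row0    = rank * base + (rank < rem ? rank : rem);

    /* Input vector (known on all ranks) and the local slice of the result. */
    double v[DIM];
    for (int j = 0; j < DIM; ++j) v[j] = 1.0;
    double *y_local = malloc((my_rows > 0 ? my_rows : 1) * sizeof(double));

    /* Toy symmetric matrix H[i][j] = 1/(1+|i-j|); the real Hamiltonian is
     * sparse and never stored as a full dense matrix. */
    for (int i = 0; i < my_rows; ++i) {
        int gi = row0 + i;
        double sum = 0.0;
        for (int j = 0; j < DIM; ++j)
            sum += v[j] / (1.0 + (gi > j ? gi - j : j - gi));
        y_local[i] = sum;
    }

    /* Gather all slices so every rank holds the full y = H v. */
    double y[DIM];
    int *counts = malloc(nranks * sizeof(int));
    int *displs = malloc(nranks * sizeof(int));
    for (int r = 0; r < nranks; ++r) {
        counts[r] = base + (r < rem ? 1 : 0);
        displs[r] = r * base + (r < rem ? r : rem);
    }
    MPI_Allgatherv(y_local, my_rows, MPI_DOUBLE,
                   y, counts, displs, MPI_DOUBLE, MPI_COMM_WORLD);

    if (rank == 0)
        for (int i = 0; i < DIM; ++i)
            printf("y[%d] = %f\n", i, y[i]);

    free(y_local); free(counts); free(displs);
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with mpirun -n <ranks>, the sketch produces the same y on any number of ranks; only the division of work and the single collective change, which is the essence of the scaling across computer nodes described above.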
dc.identifier.coursecode | TIFX61 | sv |
dc.identifier.uri | https://hdl.handle.net/20.500.12380/304723 | |
dc.language.iso | eng | sv |
dc.setspec.uppsok | PhysicsChemistryMaths | |
dc.subject | nuclear physics | sv |
dc.subject | no core shell model | sv |
dc.subject | Lanczos algorithm | sv |
dc.subject | large-scale matrix diagonalization | sv |
dc.subject | parallelization | sv |
dc.subject | high performance computing | sv |
dc.subject | MPI | sv |
dc.title | Parallelization of computational tools for the no core shell model | sv |
dc.type.degree | Master's thesis (Examensarbete för masterexamen) | sv
dc.type.uppsok | H |