Abstract: |
In recent decades, galaxy simulations have revealed the interdependence of multiscale gas physics, such as star formation, stellar feedback, and gas inflows/outflows, through improvements in physical models and resolution. Still, so-called sub-grid models, simplified or calibrated to specific summary statistics, remain widely used because of limited resolution and scalability. Even in zoom-in simulations targeting Milky-Way-sized galaxies, the mass resolution remains capped at around 1,000 solar masses (e.g., Applebaum et al. 2021), comparable to the mass of molecular clouds. To overcome these limitations, we are developing a new N-body/SPH code, ASURA-FDPS, to leverage exascale computing (e.g., Fugaku), handle approximately one billion particles, and simulate individual stars and stellar feedback within the galaxy. However, achieving sufficient parallelization efficiency presents challenges. Conventional codes have adopted hierarchical individual time-stepping to reduce computational cost, but communication costs hinder scalability beyond about one thousand CPU cores. One of the causes is short-timescale events localized in small regions, such as supernova explosions. In response, we have coupled a machine-learning surrogate model to high-resolution galaxy simulations (Hirashima et al. 2024). In this presentation, I will describe our new approach to reducing the computational cost, the current state of our development, and the performance of Milky-Way-sized galaxy simulations using the full Fugaku system.
|