[gmx-users] Gromacs 2019 - Ryzen Architecture
Sandro Wrzalek
sandro.wrzalek at fu-berlin.de
Thu Jan 2 13:26:46 CET 2020
Hi,
happy new year!
Now to my problem:
I use Gromacs 2019.3 and try to run some simulations (roughly 30k
atoms per system) on my PC, which has the following configuration:
CPU: Ryzen 3950X (overclocked to 4.1 GHz)
GPU #1: Nvidia RTX 2080 Ti
GPU #2: Nvidia RTX 2080 Ti
RAM: 64 GB
PSU: 1600 Watts
Each run uses one GPU and 16 of the 32 logical cores. Doing only one run
at a time (gmx mdrun -deffnm rna0 -gpu_id 0 -nb gpu -pme gpu), the GPU
utilization is roughly 84%, but if I add a second run, the utilization
of both GPUs drops to roughly 20%, while logical cores 17-32 are left
idle (for the second run I changed -gpu_id accordingly).
Adding additional parameters for each run:
gmx mdrun -deffnm rna0 -nt 16 -pin on -pinoffset 0 -gpu_id 0 -nb gpu -pme gpu
gmx mdrun -deffnm rna0 -nt 16 -pin on -pinoffset 17 -gpu_id 1 -nb gpu -pme gpu
I get a utilization of 78% per GPU, which is nice but still short of the
84% I got with only one run. In theory, however, it should come at least
close to that.
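One thing I still want to try is something along the lines of the
multi-run pinning examples in the docs, which use offsets 0 and 16 with
an explicit stride - assuming the 32 hardware threads are indexed 0-31,
my offset of 17 may simply be off by one:
gmx mdrun -deffnm rna0 -nt 16 -pin on -pinoffset 0 -pinstride 1 -gpu_id 0 -nb gpu -pme gpu
gmx mdrun -deffnm rna1 -nt 16 -pin on -pinoffset 16 -pinstride 1 -gpu_id 1 -nb gpu -pme gpu
(rna1 is just a stand-in for the second system's files.)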
I suspect the Ryzen chiplet design as the culprit, since Gromacs seems
to prefer the first chiplet, even if two simultaneous simulations have
the resources to occupy both. The reason for the 78% utilization could
be the overhead of communication between the two chiplets via the
Infinity Fabric. However, I have no proof, nor am I able to explain why
gmx mdrun -deffnm rna0 -nt 16 -nb gpu -pme gpu (one run with -gpu_id 0,
one with -gpu_id 1) works as well - it seems to occupy the free logical
cores then.
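If it helps with diagnosing this, I was going to double-check how the
logical CPUs map onto physical cores and chiplets with something like:
lscpu --extended   # one line per logical CPU; SMT siblings share a CORE id
lstopo --no-io     # from the hwloc package; shows the cache/CCX topology
(I have not verified the numbering on this exact box yet.)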
Long story short:
Are there any workarounds to squeeze the last bit out of my setup? Is it
possible to choose the logical cores manually (I have not found anything
in the docs so far)?
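The closest thing I can think of is wrapping each run in taskset and
turning mdrun's own pinning off, but I do not know how well that plays
with mdrun - just a sketch, assuming logical CPUs 0-15 and 16-31 really
form two non-overlapping halves on this box:
taskset -c 0-15 gmx mdrun -deffnm rna0 -nt 16 -pin off -gpu_id 0 -nb gpu -pme gpu
taskset -c 16-31 gmx mdrun -deffnm rna1 -nt 16 -pin off -gpu_id 1 -nb gpu -pme gpu
(Again, rna1 is just a placeholder for the second system.)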
Thank you for your help!
Best,
Sandro