[gmx-users] GPU-accelerated EM

Alex nedomacho at gmail.com
Fri Sep 8 23:53:30 CEST 2017


Hi all,

I tend to run one fat simulation across the whole node, but we have another
user whose use case is different. He runs a bunch of small jobs at the same
time, and in his case most of the time is taken up by EM. We have two
hyperthreaded Xeon E5 CPUs (44 cores total) and 3 GPUs.

So, we try to run 10 jobs using 4 cores each. Each mdrun line includes

-ntomp 4 -pin on -pinoffset x -pinstride 1 -gpu_id y

where pinoffset is 0, 4, 8, etc. for the first, second, etc. simulation, and
y cycles from 0 to 2 so that the first two GPUs are each subscribed three
times, while the third one is subscribed four times.

Upon submitting this batch, each mdrun instance utilizes something like 10%
CPU and the EM jobs proceed slowly. Removing the GPU assignment and setting
-nb cpu seems to fix that. With MD everything works properly, but with EM
there's no GPU acceleration and low CPU usage. Is this the correct behavior?
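For reference, the variant that behaves normally is the same line with the
GPU assignment dropped (same placeholder naming as in the sketch above):

gmx mdrun -deffnm em_01 -ntomp 4 -pin on -pinoffset 0 -pinstride 1 -nb cpu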

Thanks,

Alex
