[gmx-users] Fwd: Question about GPU acceleration in GROMACS 5

Tomy van Batis tomyvanbatis at gmail.com
Fri Dec 12 14:00:45 CET 2014


Dear all

I am working with a system of about 200,000 particles. All the non-bonded
interactions in the system are of Lennard-Jones type (no Coulomb). I constrain
the bond lengths with LINCS. No torsion or bending interactions are taken
into account.


I am running the simulations on a 4-core Xeon® E5-1620 @ 3.70GHz
together with an NVIDIA Tesla K20Xm. I observe strange behavior when
looking at the performance of the simulations:


1. Running on 4 cores + GPU:

GPU/CPU force evaluation time = 9.5 and GPU usage = 58% (observed with the
command nvidia-smi)





2. Running on 2 cores + GPU:

GPU/CPU force evaluation time = 9.9 and GPU usage = 45-50% (image not
included due to size restrictions)



The situation doesn't change if I pass the option -nb gpu (or -nb gpu_cpu)
to mdrun.
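For context, a minimal sketch of the runs I am comparing (the -ntomp values and the deffnm names are placeholders, not my exact command lines):

```shell
# Run 1: all 4 cores plus the GPU (GROMACS 5 detects and uses the GPU automatically)
gmx mdrun -deffnm run4 -ntomp 4 -nb gpu

# Run 2: only 2 OpenMP threads plus the GPU
gmx mdrun -deffnm run2 -ntomp 2 -nb gpu

# Splitting the non-bonded work between GPU and CPU made no difference:
gmx mdrun -deffnm run4 -ntomp 4 -nb gpu_cpu
```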


I can see from the mailing list that the GPU/CPU force evaluation time ratio
should be about 1, which means I am far from optimal performance.


Does anybody have any suggestions about how to improve the computational
speed?


Thanks in advance,

Tomy

