[gmx-users] A problem with potential energy values and openMP

Milko Vesterinen milko.j.vesterinen at student.jyu.fi
Thu Oct 8 08:39:38 CEST 2015


Dear Gromacs users,

I have been working with Gromacs v. 5.0.4 by
using its MPI/OPENMP – parallelization. While studying outputs of
mdrun with input argument ”ntomp” (shell command ”gmx_mpi mdrun
-ntomp i” where i = 1,2 or 4), the potential energy values seemed
to differ. My typical demo run included 100 steps and the potential energy
values were checked for the first (initial) step.
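
For reference, here is a minimal sketch of how such a comparison can be set
up and the energies extracted ("demo.tpr" and the output prefixes are
placeholders, not the actual file names I used):

"
# Run the same system with 1, 2, and 4 OpenMP threads and dump the Potential term
for i in 1 2 4; do
    gmx_mpi mdrun -s demo.tpr -deffnm ntomp${i} -ntomp ${i}
    # The value at step 0 is the one compared below
    echo "Potential" | gmx_mpi energy -f ntomp${i}.edr -o pot_ntomp${i}.xvg
done
"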

Below, the obtained potential energy values are given for two different
energy-group setups ("System" and "Prot-Sol") and for three "-ntomp" values
(potential energy in kJ/mol):

                           System        Prot-Sol
ntomp: 1                -193754.7       -193733.2
ntomp: 2                -193741.3       -193795.4
ntomp: 4 ("default")    -193744.7       -193758.7
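
For scale, the largest deviation in the "System" column is about 10 kJ/mol,
i.e. roughly a 5e-5 relative difference. A quick check with awk, using the
values from the table above:

"
# Absolute and relative deviation between the ntomp=1 and ntomp=4 "System" potentials
awk 'BEGIN { a = -193754.7; b = -193744.7; printf "%.1f kJ/mol, %.1e relative\n", a - b, (a - b) / a }'
"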

A more detailed investigation showed that the deviations come from the
short-range Coulomb energy terms. In the "Prot-Sol" simulations, all
short-range Coulomb terms other than "SOL-SOL" differed by less than
1 kJ/mol; the "SOL-SOL" term accounted for most of the difference. I use
"PME" as "coulombtype" here, but the potential energy values also differed
when using "cut-off", and again the differences appeared mainly in the
short-range Coulomb terms.
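
This is a sketch of how such a per-group term can be pulled out of the
energy file (the exact term name depends on the energygrps defined in the
.mdp file; the file name is a placeholder):

"
# Extract the short-range Coulomb energy for the SOL-SOL group pair
echo "Coul-SR:SOL-SOL" | gmx_mpi energy -f ntomp1.edr -o coul_sr_sol_sol.xvg
"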

The question is: can one expect such differences in the potential energy
when using OpenMP parallelization, or is this simply a user error?

GROMACS was run on a server whose details, as reported by the "lscpu"
command, are shown below:

"
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 44
Stepping:              2
CPU MHz:               2400.000
BogoMIPS:              4799.98
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              12288K
NUMA node0 CPU(s):     0-3
"

For example, the "default" run (shell command "gmx_mpi mdrun", without an
explicit -ntomp) prints the following information:

"
Using 1 MPI process
Using 4 OpenMP threads
No GPUs detected on host xxx.xxx.jyu.fi
"

Thank you.

M-J Vesterinen

University of Jyväskylä

