[gmx-users] A problem with potential energy values and OpenMP
Mark Abraham
mark.j.abraham at gmail.com
Thu Oct 8 10:44:39 CEST 2015
Hi,
This is normal. See
http://www.gromacs.org/Documentation/Terminology/Reproducibility. You will
observe similar differences if you run with different numbers of MPI ranks.
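The underlying reason is that floating-point addition is not associative:
with a different number of OpenMP threads (or MPI ranks) the same
per-interaction contributions are summed in a different order, so the
trailing digits of the accumulated energy terms change. A minimal
illustration at the shell, using plain double-precision arithmetic in awk
(nothing GROMACS-specific):

awk 'BEGIN {
    x = 0.1; y = 0.2; z = 0.3           # the same three terms
    printf "%.17g\n", (x + y) + z       # accumulated left to right
    printf "%.17g\n", x + (y + z)       # accumulated right to left
}'

which prints

0.60000000000000009
0.59999999999999998

Summed over the millions of pair interactions in a typical system, such
rounding differences end up in the last few significant digits of the
energy terms, which is what you observe.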
Mark
On Thu, Oct 8, 2015 at 8:40 AM Milko Vesterinen <
milko.j.vesterinen at student.jyu.fi> wrote:
> Dear Gromacs users,
>
> I have been working with GROMACS 5.0.4 using its MPI/OpenMP
> parallelization. While studying the output of mdrun for different values
> of the "-ntomp" option (shell command "gmx_mpi mdrun -ntomp i", where
> i = 1, 2 or 4), the potential energy values seemed to differ between the
> runs. A typical demo run was 100 steps long, and the potential energy
> values were compared at the first (initial) step.
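>
> As a rough sketch, the runs and the comparison can be scripted along the
> following lines (the file names are only illustrative defaults, and I
> understand "gmx check" can compare two energy files term by term):
>
> for i in 1 2 4; do
>     # each run writes its own ntomp$i.edr, ntomp$i.log, ...
>     gmx_mpi mdrun -s topol.tpr -ntomp $i -deffnm ntomp$i
> done
> # term-by-term comparison of the energies of two runs
> gmx_mpi check -e ntomp1.edr -e2 ntomp4.edr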
>
> Below, the obtained potential energy values are listed for two different
> energy groups and for the three "-ntomp" values (potential energies in
> kJ/mol):
>
>                          System    Prot-Sol
> -ntomp 1              -193754.7   -193733.2
> -ntomp 2              -193741.3   -193795.4
> -ntomp 4 (default)    -193744.7   -193758.7
>
> A more detailed investigation showed that the deviations come from the
> short-range Coulomb terms: in the "Prot-Sol" simulations, all Coulomb-SR
> contributions other than the "SOL-SOL" one differed by less than 1 kJ/mol,
> so the "SOL-SOL" term accounts for most of the difference. I use "PME" as
> "coulombtype" here, but the potential energy values also differed with
> "cut-off", and again the differences appeared mainly in the short-range
> Coulomb terms.
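>
> (The per-group terms can be pulled from an energy file roughly like this;
> the exact term name depends on how the energy groups are defined, so
> "Coul-SR:SOL-SOL" is only the pattern for my groups:)
>
> # write the short-range SOL-SOL Coulomb energy to an .xvg file
> echo "Coul-SR:SOL-SOL" | gmx_mpi energy -f ntomp1.edr -o coul_sr_sol_sol.xvg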
>
> The question is: can one expect such differences in the potential energy
> values when using OpenMP parallelization, or is this a simple user error?
>
> GROMACS was run on a server whose details, as reported by the command
> "lscpu", are listed below:
>
> "
> Architecture: x86_64
> CPU op-mode(s): 32-bit, 64-bit
> Byte Order: Little Endian
> CPU(s): 4
> On-line CPU(s) list: 0-3
> Thread(s) per core: 1
> Core(s) per socket: 4
> Socket(s): 1
> NUMA node(s): 1
> Vendor ID: GenuineIntel
> CPU family: 6
> Model: 44
> Stepping: 2
> CPU MHz: 2400.000
> BogoMIPS: 4799.98
> Virtualization: VT-x
> L1d cache: 32K
> L1i cache: 32K
> L2 cache: 256K
> L3 cache: 12288K
> NUMA node0 CPU(s): 0-3
> "
>
> For example, the "default" run (shell command "gmx_mpi mdrun") prints the
> following information:
>
> "
> Using 1 MPI process
> Using 4 OpenMP threads
> No GPUs detected on host xxx.xxx.jyu.fi
> "
>
> Thank you.
>
> M-J Vesterinen
>
> University of Jyväskylä