[gmx-users] Domain decomposition and large molecules
Tommaso D'Agostino
tommyscp86 at gmail.com
Tue Dec 11 16:19:54 CET 2018
Dear all,
I have a system of 27000 atoms that I am simulating both on a local cluster and
on Marconi-KNL (CINECA). In this system I simulate a small molecule with a
graphene sheet attached to it, surrounded by water. I have already simulated
this molecule successfully in a smaller system of 6500 atoms, using a 2 fs
timestep and the LINCS algorithm. Those simulations ran flawlessly with 8 MPI
ranks.
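For reference, the key settings in those smaller runs were, in rough .mdp terms
(a sketch; the full file is attached):

    dt                   = 0.002    ; 2 fs timestep
    constraint-algorithm = lincs

launched with 8 MPI ranks, e.g. something along the lines of
mpirun -np 8 gmx_mpi mdrun -deffnm md (or the thread-MPI equivalent).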
Now I have increased the length of the graphene part and the number of waters
surrounding the molecule, reaching a total of 27000 atoms; however, every
simulation that I try to launch on more than 2 CPUs, or with a timestep larger
than 0.5 fs, seems to crash sooner or later (strangely, in several attempts
with 8 CPUs I was able to run up to 5 ns of simulation before the crash;
sometimes, however, the crash happens as early as 100 ps). When I do get an
error before the crash (sometimes the simulation just hangs without producing
any error), it is a series of LINCS warnings followed by a message like:
Fatal error:
An atom moved too far between two domain decomposition steps
This usually means that your system is not well equilibrated
The crashes involve a part of the molecule that I did not change when enlarging
the graphene part, and I have already checked twice that there are no missing
or wrong terms in the molecule topology. Again, I have not modified the part of
the molecule where the crashes occur at all.
I have already tried increasing lincs-order and lincs-iter up to 8, decreasing
nstlist to 1, and increasing rlist to 5.0, without any success. I have also
tried (without success) using a single charge group for the whole molecule, but
I would like to avoid this, as point charges may affect my analysis.
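In .mdp terms, the settings I experimented with looked roughly like this (a
sketch of the values listed above; the actual file is attached):

    constraint-algorithm = lincs
    lincs-order          = 8      ; increased from the default
    lincs-iter           = 8      ; increased from the default
    nstlist              = 1      ; neighbour list updated every step
    rlist                = 5.0    ; nm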
One note: I am using a V-rescale thermostat with a tau_t of 40 ps, and every
50 ps the simulation is stopped and restarted from the last frame (preserving
the velocities). I want to leave these options as they are, for consistency
with the other systems used in this work.
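In the .mdp file this corresponds to something like (temperature-coupling group
names omitted):

    tcoupl = V-rescale
    tau_t  = 40.0    ; ps, one value per coupling group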
Do you have any suggestions on what I could try in order to run these
simulations with decent performance? Even with this small number of atoms,
restricting myself to a 0.5 fs timestep and no more than 2 CPUs means I cannot
get more than 4 ns/day. I suspect it may be connected with domain
decomposition, but the -pd (particle decomposition) option was removed in
recent versions of GROMACS (I am using GROMACS 2016.1), so I cannot check that.
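For completeness, a typical launch for these runs has looked roughly like this
(file names are placeholders):

    mpirun -np 8 gmx_mpi mdrun -deffnm md

As far as I can tell, the domain-decomposition-related options still available
in mdrun 2016 are things like -rdd (maximum distance for bonded interactions
with DD) and -rcon (maximum distance for P-LINCS), which I have so far left at
their defaults.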
Attached to this email you can find the input .mdp file used for the
simulation.
Thanks in advance for the help,
Tommaso D'Agostino
Postdoctoral Researcher
Scuola Normale Superiore,
Palazzo della Carovana, Ufficio 99
Piazza dei Cavalieri 7, 56126 Pisa (PI), Italy