[gmx-users] Effect of pressure coupling frequency on gpu simulations

Mark Abraham mark.j.abraham at gmail.com
Thu May 23 08:46:38 CEST 2013


http://dx.doi.org/10.1021/ct300688p is probably very useful here.

Mark


On Thu, May 23, 2013 at 4:29 AM, Trayder Thomas
<trayder.thomas at monash.edu> wrote:

> Thanks Mark,
> That really helped to clarify how everything interacts around the Verlet
> scheme.
> What statistics do you recommend examining between nstpcouple settings?
> Pressure/box size variation is the obvious one but I was curious whether
> you had something else in mind.
> -Trayder
>
>
> On Thu, May 23, 2013 at 4:18 AM, Mark Abraham
> <mark.j.abraham at gmail.com> wrote:
>
> > On Wed, May 22, 2013 at 6:32 AM, Trayder <trayder.thomas at monash.edu>
> > wrote:
> >
> > > Hi all,
> > > I've been running 5 fs timestep simulations successfully without GPUs
> > > (united-atom, HEAVYH). When continuing the same simulations on a GPU
> > > cluster utilising the Verlet cutoff-scheme, they crash within 20 steps.
> > > Reducing the timestep to 2 fs runs smoothly; however, I noticed the
> > > message:
> > >
> > >
> > >
> > > Making this change manually led to crashing simulations, as nstcalclr,
> > > nsttcouple and nstpcouple default to the value of nstlist. After defining
> > > them all separately, I was able to determine that the explosions depended
> > > entirely on nstpcouple, and by lowering it to 5 (from the default of 10)
> > > I was able to run simulations at a 5 fs timestep.
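> > >
> > > For reference, the combination that now runs looks roughly like this
> > > (only the lines relevant here, using the values described above; exact
> > > names and defaults may differ between GROMACS versions):
> > >
> > >   dt            = 0.005   ; 5 fs, with heavy hydrogens (HEAVYH)
> > >   cutoff-scheme = Verlet
> > >   nstlist       = 50
> > >   nstcalclr     = 50
> > >   nsttcouple    = 50
> > >   nstpcouple    = 5       ; lowered from the default of 10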
> > >
> > > So, my questions: Is lowering nstpcouple a legitimate solution or just
> > > a band-aid?
> > >
> >
> > Parrinello-Rahman (P-R) does not cope well with situations where the box
> > size has to change significantly (e.g. you should normally avoid it during
> > equilibration). nstpcouple != 1 means that you simulate on an NVE manifold
> > for a period of time (maybe with some T changes if nsttcouple !=
> > nstpcouple), and I'd suppose the longer that interval, the bigger the
> > chance of a build-up of pressure that P-R will then try to relieve by
> > changing the box size. Larger nstlist and dt will exacerbate this, of
> > course. I would recommend you experiment to see how far you can push
> > things while keeping statistics that still resemble those with a small
> > nstpcouple. Larger nstpcouple helps reduce the frequency with which global
> > communication occurs, and that affects your simulation rate... life
> > is complex!
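> >
> > As a concrete starting point for that experiment, you could fix everything
> > else and vary only the pressure-coupling interval, e.g. (an illustrative
> > sketch using the settings already mentioned in this thread):
> >
> >   pcoupl      = Parrinello-Rahman
> >   nstlist     = 50
> >   nsttcouple  = 50   ; T coupling can stay on the nstlist interval
> >   nstpcouple  = 5    ; compare runs with e.g. 1, 5 and 10 here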
> >
> > It would be nice if we were able to compute heuristics so that mdrun
> > could anticipate such a problem and warn you, but off-hand that seems a
> > tricky problem...
> >
> > > The simulation runs with nstcalclr and nsttcouple set to 50 along with
> > >
> >
> > nstcalclr should have no effect - it works only with the group scheme,
> > which does not work on GPUs.
> >
> >
> > > nstlist. Is nstlist the only setting that should be increased when
> > > utilising GPUs?
> > >
> >
> > Yes, AFAIK. The point is that nstlist is the interval between neighbour
> > searches, and (at the moment at least) that's only done on the CPU. The
> > Verlet kernels cheerfully compute lots of zero-strength interactions
> > outside the cutoff (by design), and depending on the relative performance
> > of your hardware it is normally more efficient to bump nstlist up (and
> > rlist accordingly, to provide a larger buffer for diffusion of particles)
> > and compute more zeroes than it is to search for neighbours more often.
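> >
> > In mdp terms that trade-off looks something like this (illustrative
> > numbers only; the best values depend on your hardware, and recent GROMACS
> > versions can also size the Verlet buffer for you automatically):
> >
> >   cutoff-scheme = Verlet
> >   nstlist       = 40    ; search for neighbours less often
> >   rlist         = 1.1   ; a bit beyond rvdw/rcoulomb (e.g. 1.0), so the
> >                         ; margin buffers particle diffusion between searches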
> >
> > Mark
> >
> >
> > >
> > > Thanks in advance,
> > > -Trayder
> > >
> > > P.S. The working mdp file:
> > >
> > >
> > >
> > >
> > >
> > >
> > >


