[gmx-users] atom moved too far
Christos Deligkaris
deligkaris at gmail.com
Mon Jan 13 15:33:02 CET 2020
Justin, thank you.
I installed GROMACS 2020 (just gmx, not gmx_mpi), and the simulation
is currently at ~75 ns, so I think that solved the problem.
It seems to me that either I did something wrong when installing
gmx_mpi (GROMACS 2018) or I should not run gmx_mpi on a single node.
gmx 2020 also gives a ~25% speed-up compared to gmx_mpi 2018 (on 12
cores, single node).
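
For reference, you can check which MPI flavor a gmx binary was built
with (a quick check; on the builds I have tried, the built-in
thread-MPI build reports "thread_mpi" on this line):

    gmx --version | grep -i "MPI library"
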
Best wishes,
Christos Deligkaris, PhD
On Mon, Jan 6, 2020 at 8:30 PM Justin Lemkul <jalemkul at vt.edu> wrote:
>
>
>
> On 1/6/20 2:59 PM, Christos Deligkaris wrote:
> > Justin, thank you.
> >
> > I have implemented the pull code, but that also exhibits the same
> > error when I use 12 cores (it failed at about 2 ns), while the
> > simulation runs fine when I use 6 cores (now at about 32 ns).
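> >
> > For context, my pull code block is of this general form (a sketch;
> > the group names, geometry, and force constant below are placeholders,
> > not my exact setup):
> >
> >     pull                 = yes
> >     pull-ngroups         = 2
> >     pull-ncoords         = 1
> >     pull-group1-name     = groupA
> >     pull-group2-name     = groupB
> >     pull-coord1-type     = umbrella
> >     pull-coord1-geometry = distance
> >     pull-coord1-groups   = 1 2
> >     pull-coord1-k        = 1000       ; kJ mol^-1 nm^-2
> >     pull-coord1-start    = yes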
> >
> > I tried the v-rescale thermostat (instead of Nose-Hoover) with the
> > Parrinello-Rahman barostat, which failed. I also tried the v-rescale
> > thermostat with the Berendsen barostat, but that failed too. It seems
> > to me that this is not an equilibration issue.
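> >
> > The coupling settings I mean look roughly like this in the .mdp (a
> > sketch; the tau, reference, and compressibility values below are
> > placeholders, not my exact inputs):
> >
> >     tcoupl          = v-rescale
> >     tc-grps         = System
> >     tau-t           = 0.1             ; ps
> >     ref-t           = 300             ; K
> >     pcoupl          = Parrinello-Rahman
> >     tau-p           = 2.0             ; ps
> >     ref-p           = 1.0             ; bar
> >     compressibility = 4.5e-5          ; bar^-1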
> >
> > So, to summarize, only decreasing the time step to 0.001 ps or
> > decreasing the number of cores allows the calculation to proceed.
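> >
> > Concretely, the one .mdp change that helps is (everything else held
> > fixed):
> >
> >     dt = 0.001    ; ps, reduced from my previous (larger) time step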
> >
> > On this mailing list, I read that someone else tried different
> > arguments to mdrun (-nt, -ntomp, etc.) to solve the same problem. Is
> > it possible that the problem arises from my running gmx_mpi on a
> > single node? This is the command I use in my submission script:
> >
> > mpirun --mca btl tcp,sm,self /opt/gromacs-2018.1/bin/gmx_mpi mdrun \
> >     -ntomp $ntomp -v -deffnm "${inputfile%.tpr}"
> >
> > If you think this is not due to a physics issue, I can continue
> > doing calculations with 6 cores and try to install GROMACS 2020
> > (both gmx and gmx_mpi) to see whether my problem persists there.
>
> If you're running on a single node, there's no need for an external MPI
> library. Perhaps you've got a buggy implementation? Have you tried using
> 12 cores via the built-in thread MPI library?
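>
> A sketch of what that looks like (the thread counts and the -deffnm
> name here are just examples for a 12-core node):
>
>     gmx mdrun -nt 12 -v -deffnm md
>
> or, setting thread-MPI ranks and OpenMP threads explicitly:
>
>     gmx mdrun -ntmpi 2 -ntomp 6 -v -deffnm md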
>
> -Justin
>
> --
> ==================================================
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalemkul at vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==================================================