[gmx-users] Error while running protein-carbohydrate simulation
Evan Lowry
evanwlowry at gmail.com
Mon Jul 11 02:09:17 CEST 2016
Can you give me more details about the system? 100 ps is very short and may
not be adequate.
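
If you do end up extending equilibration, the change is just the step count in
the NVT/NPT .mdp files. A rough sketch (assuming a 1 fs time step as discussed
below; the nvt.mdp/npt.mdp file names are placeholders, so adjust to yours):

dt     = 0.001    ; 1 fs
nsteps = 200000   ; 200000 * 1 fs = 200 ps

Then regenerate the .tpr files with grompp and rerun mdrun as before.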
Evan L.
On Jul 10, 2016 5:22 PM, "Deep Bhattacharya" <
deep.bhattacharya1991 at gmail.com> wrote:
> Evan,
> One more question,
> Should I also increase the equilibration time from 100 ps to 200 ps? Will
> that help along with the 1 fs time step?
>
> Sincerely,
> Deep
>
> Sent from my iPhone
>
> > On Jul 10, 2016, at 6:09 PM, Evan Lowry <evanwlowry at gmail.com> wrote:
> >
> > I usually set tau-t = 2.0 and tau-p = 6.0 for most of my bulk fluid
> > systems. It really depends on what you are simulating, so just be sure to
> > check the temperature and pressure to ensure that they remain constant
> > and within your desired tolerances.
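> >
> > If it helps, you can pull both out of the energy file with g_energy (a
> > sketch; I am assuming your NPT run wrote npt.edr, so adjust the name):
> >
> > g_energy -f npt.edr -o temperature.xvg    (select "Temperature" at the prompt)
> > g_energy -f npt.edr -o pressure.xvg       (select "Pressure" at the prompt)
> >
> > Plot the .xvg files and confirm both fluctuate around the reference values.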
> >
> > I think you may have to change your pressure algorithm to Berendsen if
> > you are using the Nose-Hoover thermostat, but definitely read about that
> > in the GROMACS manual. Nose-Hoover usually produces an accurate
> > thermodynamic ensemble, which is valuable if you are doing anything with
> > energies.
> >
> > Evan L.
> > On Jul 10, 2016 5:00 PM, "Deep Bhattacharya" <hypergenetics at gmail.com>
> > wrote:
> >
> >> Thank you, Evan, for your response.
> >> What coupling time constants would you suggest for temperature and
> >> pressure? Yes, I will use the Nose-Hoover thermostat and see how the
> >> simulation goes.
> >> So you mean to change it like this:
> >>
> >> ; Temperature coupling
> >> tcoupl          = nose-hoover
> >> ; Groups to couple separately
> >> tc-grps         = PROTEIN SOL_Na
> >> ; Time constant (ps) and reference temperature (K)
> >> tau-t           = 0.5 0.5
> >> ref-t           = 300 300
> >> ; Pressure coupling
> >> Pcoupl          = parrinello-rahman
> >> Pcoupltype      = isotropic
> >> ; Time constant (ps), compressibility (1/bar) and reference P (bar)
> >> tau-p           = 5.0
> >> compressibility = 4.5E-5
> >> ref-p           = 1.0
> >>
> >> Awaiting your response.
> >> Thank you.
> >>
> >>
> >> Deep
> >>
> >>> On Sun, Jul 10, 2016 at 5:53 PM, Evan Lowry <evanwlowry at gmail.com>
> wrote:
> >>>
> >>> Try running with a 1 fs time step instead of 2 fs. Also view your
> >>> trajectory to see if anything is messed up. Try re-minimizing as well.
> >>> If that doesn't work, you could change the coupling times for the
> >>> temperature and pressure.
> >>>
> >>> On another note, why not use the Nose-Hoover thermostat? It may be more
> >>> accurate for the energies in your system.
> >>>
> >>> Best of luck,
> >>>
> >>> Evan L.
> >>> On Jul 10, 2016 4:45 PM, "Deep Bhattacharya" <hypergenetics at gmail.com>
> >>> wrote:
> >>>
> >>>> Hello,
> >>>> I am trying to simulate the protein-carbohydrate system mentioned in
> >>>> the subject, but it is failing with this error message:
> >>>> WARNING: Listed nonbonded interaction between particles 746 and 751
> >>>> at distance 3f which is larger than the table limit 3f nm.
> >>>>
> >>>> This is likely either a 1,4 interaction, or a listed interaction inside
> >>>> a smaller molecule you are decoupling during a free energy calculation.
> >>>> Since interactions at distances beyond the table cannot be computed,
> >>>> they are skipped until they are inside the table limit again. You will
> >>>> only see this message once, even if it occurs for several interactions.
> >>>>
> >>>> IMPORTANT: This should not happen in a stable simulation, so there is
> >>>> probably something wrong with your system. Only change the
> >>>> table-extension distance in the mdp file if you are really sure that
> >>>> is the reason.
> >>>>
> >>>>
> >>>>
> >>>> -------------------------------------------------------
> >>>> Program mdrun, VERSION 4.6.5
> >>>> Source code file: /util/src/gromacs/gromacs-4.6.5/src/mdlib/pme.c, line: 851
> >>>>
> >>>> Fatal error:
> >>>> 1 particles communicated to PME node 13 are more than 2/3 times the
> >>>> cut-off out of the domain decomposition cell of their charge group in
> >>>> dimension y.
> >>>> This usually means that your system is not well equilibrated.
> >>>> For more information and tips for troubleshooting, please check the
> >>>> GROMACS website at http://www.gromacs.org/Documentation/Errors
> >>>> -------------------------------------------------------
> >>>>
> >>>> Error on node 13, will try to stop all the nodes
> >>>> Halting parallel program mdrun on CPU 13 out of 16
> >>>>
> >>>> gcq#330: "Jede der Scherben spiegelt das Licht" ("Each of the shards reflects the light") (Wir sind Helden)
> >>>> --------------------------------------------------------------------------
> >>>> MPI_ABORT was invoked on rank 13 in communicator MPI_COMM_WORLD
> >>>> with errorcode -1.
> >>>>
> >>>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> >>>> You may or may not see output from other processes, depending on
> >>>> exactly when Open MPI kills them.
> >>>> --------------------------------------------------------------------------
> >>>> --------------------------------------------------------------------------
> >>>> mpirun has exited due to process rank 13 with PID 117399 on
> >>>> node c0323 exiting improperly. There are two reasons this could occur:
> >>>>
> >>>> 1. this process did not call "init" before exiting, but others in
> >>>> the job did. This can cause a job to hang indefinitely while it waits
> >>>> for all processes to call "init". By rule, if one process calls "init",
> >>>> then ALL processes must call "init" prior to termination.
> >>>>
> >>>> 2. this process called "init", but exited without calling "finalize".
> >>>> By rule, all processes that call "init" MUST call "finalize" prior to
> >>>> exiting or it will be considered an "abnormal termination"
> >>>>
> >>>> This may have caused other processes in the application to be
> >>>> terminated by signals sent by mpirun (as reported here).
> >>>> --------------------------------------------------------------------------
> >>>>
> >>>> Here is my .mdp file:
> >>>>
> >>>> title                = CD44 deHA_sulf_red complex MD simulation
> >>>> ; Run parameters
> >>>> integrator           = md         ; leap-frog integrator
> >>>> nsteps               = 5000000    ; 2 fs * 5000000 = 10000 ps (10 ns)
> >>>> dt                   = 0.002      ; 2 fs
> >>>> ; Output control
> >>>> nstxout              = 50000      ; save coordinates every 100.0 ps
> >>>> nstvout              = 50000      ; save velocities every 100.0 ps
> >>>> nstenergy            = 50000      ; save energies every 100.0 ps
> >>>> nstlog               = 50000      ; update log file every 100.0 ps
> >>>> compressed-x-grps    = System
> >>>> energygrps           = Protein LIG
> >>>> ; Bond parameters
> >>>> continuation         = yes        ; continuing from NPT equilibration
> >>>> constraint_algorithm = lincs      ; holonomic constraints
> >>>> constraints          = all-bonds  ; all bonds (even heavy atom-H bonds) constrained
> >>>> lincs_iter           = 1          ; accuracy of LINCS
> >>>> lincs_order          = 4          ; also related to accuracy
> >>>> ; Neighborsearching
> >>>> cutoff-scheme        = Verlet
> >>>> ns_type              = grid       ; search neighboring grid cells
> >>>> nstlist              = 10         ; 20 fs, largely irrelevant with Verlet
> >>>> rcoulomb             = 1.4        ; short-range electrostatic cutoff (in nm)
> >>>> rvdw                 = 1.4        ; short-range van der Waals cutoff (in nm)
> >>>> ; Electrostatics
> >>>> coulombtype          = PME        ; Particle Mesh Ewald for long-range electrostatics
> >>>> pme_order            = 4          ; cubic interpolation
> >>>> fourierspacing       = 0.16       ; grid spacing for FFT
> >>>> ; Temperature coupling
> >>>> tcoupl               = V-rescale  ; modified Berendsen thermostat
> >>>> tc-grps              = Protein_LIG Water_and_ions ; two coupling groups - more accurate
> >>>> tau_t                = 0.1 0.1    ; time constant, in ps
> >>>> ref_t                = 300 300    ; reference temperature, one for each group, in K
> >>>> ; Pressure coupling
> >>>> pcoupl               = Parrinello-Rahman ; pressure coupling is on for NPT
> >>>> pcoupltype           = isotropic  ; uniform scaling of box vectors
> >>>> tau_p                = 2.0        ; time constant, in ps
> >>>> ref_p                = 1.0        ; reference pressure, in bar
> >>>> compressibility      = 4.5e-5     ; isothermal compressibility of water, bar^-1
> >>>> ; Periodic boundary conditions
> >>>> pbc                  = xyz        ; 3-D PBC
> >>>> ; Dispersion correction
> >>>> DispCorr             = EnerPres   ; account for cut-off vdW scheme
> >>>> ; Velocity generation
> >>>> gen_vel              = no         ; velocities carried over from the NPT run
> >>>> I have performed 100 ps of NVT and 100 ps of NPT equilibration before
> >>>> this MD run.
> >>>> Please help.
> >>>>
> >>>> *Deep S Bhattacharya*
> >>>>
> >>>> *Graduate Research Assistant*
> >>>>
> >>>> Mohs Biomedical Imaging & Nanotechnology Group
> >>>>
> >>>> Pharmaceutical Sciences
> >>>>
> >>>> *University of Nebraska Medical Center*
> >>>>
> >>>> 4018 Eppley Science Hall | Omaha, NE 68198-6805
> >>>>
> >>>> office 402.559.4349 | cell 402.906.1640
> >>>>
> >>>> deep.bhattacharya at unmc.edu | deep.bhattacharya199 at gmail.com
> >>
> >> --
> >> Deep Bhattacharya.
> >> Post-Graduate Student,
> >> Department of Pharmaceutical Chemistry,
> >> Bombay College Of Pharmacy.
> >> Mobile - +918976129616