[gmx-users] Re: Segmentation fault, mdrun_mpi
Justin Lemkul
jalemkul at vt.edu
Sat Oct 6 04:56:25 CEST 2012
On 10/5/12 3:03 PM, Ladasky wrote:
> Bumping this once before the weekend, hoping to get some help.
>
> I am getting segmentation fault errors at 1 to 2 million cycles into my
> production MD runs, using GROMACS 4.5.4. If these errors are a consequence
> of a poorly-equilibrated system, I am no longer getting the right kind of
> error messages to support that conclusion. I am not getting PME or SETTLE
> errors. I am getting a non-descriptive segmentation fault.
>
> I have corrected earlier shortcomings in my equilibration protocols, as
> discussed in this earlier thread:
>
> http://gromacs.5086.n6.nabble.com/Re-Water-molecules-cannot-be-settled-why-tp4999302.html
>
> I am now monitoring the macroscopic properties of my simulation. Potential,
> pressure, density, and temperature convergence and subsequent stability are
> achieved, at least as well as demonstrated in Justin Lemkul's most recent
> tutorial:
>
> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/index.html
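>
> (For reference, a typical way to pull those terms out of the energy file is
> with g_energy; the file names here are placeholders, not my actual ones:
>
>     echo Potential   | g_energy -f ener.edr -o potential.xvg
>     echo Pressure    | g_energy -f ener.edr -o pressure.xvg
>     echo Density     | g_energy -f ener.edr -o density.xvg
>     echo Temperature | g_energy -f ener.edr -o temperature.xvg
> )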
>
> The trajectories of my simulations do not appear to be radical in any way
> that I can discern. I have a partially-unfolded protein, folding gradually,
> in a box of water with counter-ions.
>
> From my previous thread, I have come to appreciate just how far from
> equilibrium the initial state of a simulation can be. Also, I have always
> understood that MD simulations are chaotic, and that instabilities can
> result simply from the fact that a continuous system is being modeled in
> discrete time steps. (As an aside, one of my first programming puzzles was
> about exactly this kind of thing. When I was a high-school student, I
> wanted to simulate the orbits of the Moon about the Earth, and the Earth
> about the Sun. It sounded simple enough, just apply the inverse-square law
> for gravity, right? Yet no matter how I tried, I couldn't achieve a stable
> system. Deeper reading led me to the intuitive and quick "leapfrog" method
> of integrating the equations of motion, which GROMACS apparently uses, and
> to the more powerful but slower Runge-Kutta methods, which GROMACS
> apparently does not use.)
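>
> (For anyone curious, here is a minimal kick-drift-kick leapfrog sketch for
> that orbit problem. It is only an illustration; the Python names, the units
> (G*M = 1), and the step size are mine:
>
>     def accel(x, y):
>         """Inverse-square acceleration toward a sun fixed at the origin."""
>         r3 = (x * x + y * y) ** 1.5
>         return -x / r3, -y / r3
>
>     def leapfrog(x, y, vx, vy, dt, steps):
>         ax, ay = accel(x, y)
>         for _ in range(steps):
>             vx += 0.5 * dt * ax   # half kick
>             vy += 0.5 * dt * ay
>             x += dt * vx          # full drift
>             y += dt * vy
>             ax, ay = accel(x, y)  # recompute forces at the new position
>             vx += 0.5 * dt * ax   # second half kick
>             vy += 0.5 * dt * ay
>         return x, y, vx, vy
>
>     # A circular orbit at radius 1 (speed 1) stays bounded over many
>     # periods, where naive Euler integration steadily spirals outward.
>     print(leapfrog(1.0, 0.0, 0.0, 1.0, dt=0.01, steps=100000))
>
> Being symplectic, leapfrog conserves a slightly perturbed energy rather than
> accumulating drift, which is exactly why it keeps orbits, and MD systems,
> stable.)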
>
> By correcting my earlier problems, I have extended the time that my
> simulations will run by a factor of 10-20, out to several nanoseconds.
> That's progress, but I'm never going to get to one microsecond this way.
>
> Any advice is appreciated. Of course I can post MDP files again, as well as
> graphs.
>
Random segmentation faults are really hard to debug. Can you resume the run
using a checkpoint file? If the run picks up from the checkpoint and continues
cleanly, that would suggest an MPI problem or something else external to
Gromacs. Without a reproducible failure and a debugging backtrace, it's going
to be hard to figure out where the problem is coming from.
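If you want to try it, something along these lines should pick up from the
last checkpoint (the file names and process count here are illustrative, not
taken from your setup):

    mpirun -np 8 mdrun_mpi -s topol.tpr -cpi state.cpt -append

And if you can enable core dumps first (ulimit -c unlimited) and build with
debug symbols, gdb will usually give a usable backtrace from the core file:

    gdb mdrun_mpi core
    (gdb) bt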
-Justin
--
========================================
Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
========================================