[gmx-users] Segmentation Fault (Address not mapped)

darrellk at ece.ubc.ca darrellk at ece.ubc.ca
Wed Jul 15 08:34:00 CEST 2009


Hi Justin,
I was experiencing the problem before someone suggested using editconf, so
I do not think the problem is being caused by editconf. But anyway, here
is my editconf command. Let me know if you see a source of error in this
command line.

editconf -f graphene.gro -n index.ndx -o graphene_ec.gro

I did not want to add additional space between the solvent and the box,
as I saw no reason for doing so; that is why I originally did not use
editconf.
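For reference, here is a sketch of how editconf could also be used to set
or pad the box explicitly (flags as in the GROMACS 4-era tools; the box
values below are illustrative, not necessarily the ones I used):

editconf -f graphene.gro -o graphene_ec.gro -box 38 38 38
(sets the box to 38 x 38 x 38 nm explicitly)

editconf -f graphene.gro -o graphene_ec.gro -d 1.0 -c
(pads the box by 1.0 nm around the system and centers it)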

My box dimensions are 38 nm x 38 nm x 38 nm. I used cutoffs of 2 nm & 5 nm
for my system to ensure the cutoff occurred at a distance where the
potentials were stabilized (not changing). I guess I could use shorter
cutoffs such as 1.5 nm & 2 nm, and this may decrease my computation time.
I also thought that I needed to use larger cut-offs since I am working
in the gas phase and there is greater distance between the atoms in my
simulation than in liquid-based simulations.

In the .log files, I do not see any LINCS warnings or neighborlist
errors.

I ran gmxcheck on a .trr file and was presented with the following
output:
*********************************************
Checking file mdtraj.trr
trn version: GMX_trn_file (single precision)
Reading frame 0 time 0.000
# Atoms 10482
Last frame 5 time 1.000


Item #frames Timestep (ps)
Step 6 0.2
Time 6 0.2
Lambda 6 0.2
Coords 6 0.2
Velocities 6 0.2
Forces 0
Box 6 0.2
*********************************************

I ran two additional simulations with different values for the nsteps and
nstxxxx parameters and have the following to report:

When I run a simulation with the following parameters, it completes
successfully and I see, in the log file, the system output every 100
time steps.
nsteps          =10000
nstcomm         =100
nstxout         =100
nstfout         =0
nstlog          =100
nstenergy       =100
nstlist         =100

When I run a simulation with the following parameters, it fails with a
segmentation fault and, in the log file, I do not see system output every
500 time steps.
nsteps          =30000
nstcomm         =500
nstxout         =500
nstfout         =0
nstlog          =500
nstenergy       =500
nstlist         =500
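As Justin suggested below, a diagnostic run that writes every frame might
catch the first few steps before the crash. A sketch of the output-related
.mdp settings for such a run (the values are illustrative, chosen only to
maximize output frequency, not settings I have already tested):

nsteps          =30000
nstxout         =1     ; write coordinates every step to catch the explosion
nstlog          =1     ; log every step
nstenergy       =1     ; energies every step
nstlist         =10    ; update the neighbour list far more often than every 500 steps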

Please let me know what you think might be the problem.

Thank you very much.

Darrell


>Date: Mon, 13 Jul 2009 15:37:15 -0400
>From: "Justin A. Lemkul" <jalemkul at vt.edu>
>Subject: Re: [gmx-users] Segmentation Fault (Address not mapped)
>To: Discussion list for GROMACS users <gmx-users at gromacs.org>
>Message-ID: <4A5B8CEB.4020609 at vt.edu>
>Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
>
>
>darrellk at ece.ubc.ca wrote:
>> Hi Mark,
>> I used editconf on my .gro file with zero space between my solvent and
>> the box and the resulting box had the exact same dimension as the
>> initial box. I also performed a number of simulation runs with different
>
>If you're using editconf to define zero space, what's the point?  I only ask
>because it is a potential source of error if you think you're adding zero space,
>but something else might be going on.  Maybe you can post your editconf command
>line.
>
>What are your box dimensions?  Are cut-off lengths of 2.0 and 5.0 nm appropriate
>for your system?  How did you determine that these cut-off's should be used?
>
>> mdp parameters hoping this would provide me some indication of the cause
>> of the fault but to no avail. I looked through the log files, error
>> files, and output files and could not find any output to help me
>> identify the source of my error.
>>
>
>It is very odd that Gromacs isn't reporting anything at all.  No LINCS warnings?
>No neighborlist errors?  These would be in the .log file.
>
>> Could you please let me know how I can look at my structure at each point
>> as you indicate below, as I do not see any output files that allow me
>> to do so? I tried to look at the .trr file but when I try to load it
>> into VMD, it causes an error. I am assuming this error is caused because
>> the .trr file did not complete correctly due to the segmentation fault.
>> Please advise.
>>
>
>How early is the segmentation fault occurring?  I have found it useful sometimes
>to set nstxout (or nstxtcout) = 1 to try to catch the first few frames if the
>explosion is occurring early.  In any case, gmxcheck will help determine how
>many frames are present, as well as the integrity of the file (broken frames, etc).
>
>-Justin
>
>> Thanks.
>>
>> Darrell
>>
>>> Date: Tue, 07 Jul 2009 09:19:42 +1000
>>> From: Mark Abraham <Mark.Abraham at anu.edu.au>
>>> Subject: Re: [gmx-users] Segmentation Fault (Address not mapped)
>>> To: Discussion list for GROMACS users <gmx-users at gromacs.org>
>>> Message-ID: <4A52868E.6010807 at anu.edu.au>
>>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>>>
>>> darrellk at ece.ubc.ca wrote:
>>>> Hi Mark,
>>>> I added the energy group exclusions as indicated in your previous
>>>> response but am still experiencing the same problem. I looked at the
>>>> .log files and see that in one log file it tells me that my box is
>>>> exploding. However, I do not have many molecules in my simulation and
>>>> therefore do not think that it is possible that my box is exploding from
>>>> pressure.
>>> Sure, but if there's something malformed with your model physics or
>>> starting configuration, then large forces can make anything explode.
>>>
>>> Look at your structures at each point and see where things start to go
>>> wrong. Make sure you've used editconf on your starting structure to
>>> provide the right box dimensions.
>>>
>>> Mark
>>>
>>>> Maybe if I re-state my simulation it will help you in providing me
>>>> direction on what might be causing the problem. My simulation consists
>>>> of a graphene lattice with a layer of ammonia molecules above it. The
>>>> box is very large and there is lots of empty space in the box. So I am a
>>>> little confused as to how the box could be exploding.
>>>>
>>>> Thanks again in advance for your help.
>>>>
>>>> Darrell Koskinen
>>>>
>>>>> Date: Fri, 03 Jul 2009 11:41:45 +1000
>>>>> From: Mark Abraham <Mark.Abraham at anu.edu.au>
>>>>> Subject: Re: [gmx-users] Segmentation Fault (Address not mapped)
>>>>> To: Discussion list for GROMACS users <gmx-users at gromacs.org>
>>>>> Message-ID: <4A4D61D9.6080700 at anu.edu.au>
>>>>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>>>>>
>>>>> darrellk at ece.ubc.ca wrote:
>>>>>> Dear GROMACS Gurus,
>>>>>> I am experiencing a segmentation fault when mdrun executes. My simulation
>>>>>> has a graphene lattice with an array (layer) of ammonia molecules above
>>>>>> it. The box is three times the width of the graphene lattice, three
>>>>>> times the length of the graphene lattice, and three times the height
>>>>>> between the graphene lattice and the ammonia molecules. I am including
>>>>>> the mdp file and the error message.
>>>>> Probably your system is exploding when integration fails with excessive
>>>>> forces. You should look at the bottom of stdout, stderr, *and* the .log
>>>>> file to diagnose. The error message you give below is merely the
>>>>> diagnostic trace from the MPI library, and is not useful for finding out
>>>>> what GROMACS thinks the problem might be. Further advice below.
>>>>>
>>>>>> ***************************************************************************
>>>>>> .mdp file
>>>>>> title           =FWS
>>>>>> ;warnings       =10
>>>>>> cpp             =cpp
>>>>>> ;define         =-DPOSRES
>>>>>> ;constraints    =all-bonds
>>>>>> integrator      =md
>>>>>> dt              =0.002 ; ps
>>>>>> nsteps          =100000
>>>>>> nstcomm         =1000
>>>>>> nstxout         =1000
>>>>>> ;nstvout                =1000
>>>>>> nstfout         =0
>>>>>> nstlog          =1000
>>>>>> nstenergy       =1000
>>>>>> nstlist         =1000
>>>>>> ns_type         =grid
>>>>>> rlist           =2.0
>>>>>> coulombtype     =PME
>>>>>> rcoulomb        =2.0
>>>>>> vdwtype         =cut-off
>>>>>> rvdw            =5.0
>>>>>> fourierspacing  =0.12
>>>>>> fourier_nx      =0
>>>>>> fourier_ny      =0
>>>>>> fourier_nz      =0
>>>>>> pme_order       =4
>>>>>> ewald_rtol      =1e-5
>>>>>> optimize_fft    =yes
>>>>>>
>>>>>> ; This section added to freeze the hydrogen atoms at the edge of the
>>>>>> ; graphene lattice, to prevent movement of the lattice
>>>>>> ;energygrp_excl = Edge Edge Edge Grph Grph Grph
>>>>>> freezegrps      = Edge Grph ; Hydrogen atoms in the graphene lattice
>>>>>> ; are associated with the residue Edge
>>>>> See comments in 7.3.24 of manual. You need the energy group exclusions.
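For completeness, a sketch of what Mark's advice amounts to here, using the
group names from the .mdp above (the commented-out energygrp_excl line,
uncommented; note that the groups named in energygrp_excl must also be
listed under energygrps — whether anything else in the .mdp needs to change
depends on the rest of the file):

energygrps      = Edge Grph
energygrp_excl  = Edge Edge Edge Grph Grph Grph
freezegrps      = Edge Grph
freezedim       = Y Y Y Y Y Y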
>>>>>
>>>>> Mark
>>>>>
>>>>>> freezedim       = Y Y Y Y Y Y; Freeze hydrogen atoms in all directions
>>>>>>
>>>>>> ;Tcoupl         =berendsen
>>>>>> ;tau_t          =0.1    0.1
>>>>>> ;tc-grps                =protein non-protein
>>>>>> ;ref_t = 300 300
>>>>>>
>>>>>> ;Pcoupl = parrinello-rahman
>>>>>> ;tau_p = 0.5
>>>>>> ;compressibility = 4.5e-5
>>>>>> ;ref_p = 1.0
>>>>>>
>>>>>> ;gen_vel = yes
>>>>>> ;gen_temp = 300.0
>>>>>> ;gen_seed = 173529
>>>>>> ***************************************************************************
>>>>>>
>>>>>> ***************************************************************************
>>>>>> ERROR IN OUTPUT FILE
>>>>>> [node16:25758] *** Process received signal ***
>>>>>> [node16:25758] Signal: Segmentation fault (11)
>>>>>> [node16:25758] Signal code: Address not mapped (1)
>>>>>> [node16:25758] Failing at address: 0xfffffffe1233e230
>>>>>> [node16:25758] [ 0] /lib64/libpthread.so.0 [0x3834a0de80]
>>>>>> [node16:25758] [ 1] /usr/lib64/libmd_mpi.so.4(pme_calc_pidx+0xd6)
>>>>>> [0x2ba295dd0606]
>>>>>> [node16:25758] [ 2] /usr/lib64/libmd_mpi.so.4(do_pme+0x808)
>>>>>> [0x2ba295dd4058]
>>>>>> [node16:25758] [ 3] /usr/lib64/libmd_mpi.so.4(force+0x8de)
>>>>>> [0x2ba295dba5be]
>>>>>> [node16:25758] [ 4] /usr/lib64/libmd_mpi.so.4(do_force+0x5ef)
>>>>>> [0x2ba295ddeaff]
>>>>>> [node16:25758] [ 5] mdrun_mpi(do_md+0xe23) [0x411193]
>>>>>> [node16:25758] [ 6] mdrun_mpi(mdrunner+0xd40) [0x4142f0]
>>>>>> [node16:25758] [ 7] mdrun_mpi(main+0x239) [0x4146f9]
>>>>>> [node16:25758] [ 8] /lib64/libc.so.6(__libc_start_main+0xf4)
>>>>>> [0x3833e1d8b4]
>>>>>> [node16:25758] [ 9] mdrun_mpi [0x40429a]
>>>>>> [node16:25758] *** End of error message ***
>>>>>> mpirun noticed that job rank 7 with PID 25758 on node node16 exited on
>>>>>> signal 11 (Segmentation fault).
>>>>>> 7 processes killed (possibly by Open MPI)
>>>>>> ***************************************************************************
>>>>>>
>>>>>> Could you please let me know what you think may be causing the fault?
>>>>>>
>>>>>> Much thanks in advance.
>>>>>>
>>>>>> Darrell Koskinen
>>
>
>--
>========================================
>
>Justin A. Lemkul
>Ph.D. Candidate
>ICTAS Doctoral Scholar
>Department of Biochemistry
>Virginia Tech
>Blacksburg, VA
>jalemkul[at]vt.edu | (540) 231-9080
>http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
>
>========================================


