[gmx-users] Re: Segmentation fault, mdrun_mpi

Justin Lemkul jalemkul at vt.edu
Mon Oct 8 12:10:52 CEST 2012



On 10/8/12 4:39 AM, Ladasky wrote:
> Justin Lemkul wrote:
>> My first guess would be a buggy MPI implementation.  I can't comment on
>> hardware specs, but usually the random failures seen in mdrun_mpi are a
>> result of some generic MPI failure.  What MPI are you using?
>
> I am using the OpenMPI package, version 1.4.3.  It's one of three MPI
> implementations that are included in the standard repositories of Ubuntu
> Linux 11.10.  I can also obtain MPICH2 and gromacs-mpich without jumping
> through too many hoops.  It looks like LAM is also available.  However, if
> GROMACS needs a special package to interface with LAM, it's not in the
> repositories.
>

This all seems reasonable.  I asked about the MPI implementation because
people have previously reported that LAM (which is badly outdated) causes
random seg faults and errors.  I would not necessarily implicate OpenMPI,
as I use it routinely.  I never use repository packages (I always compile
from source) because I have gotten buggy packages in the past, but I don't
know whether that's relevant here.  I'm not trying to implicate the package
maintainer in any way; I'm just noting that long ago (5-6 years) the Gromacs
package had some issues.
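
If you want to rule out the MPI layer itself, it can help to run a trivial
MPI program entirely outside of Gromacs.  Below is a minimal sketch (my own
hypothetical test file, not part of Gromacs) that just initializes MPI, does
one collective, and shuts down cleanly; if this seg faults or hangs under
your OpenMPI install, the problem is below Gromacs.

  /* mpi_sanity.c - minimal standalone check of the MPI installation.
   * Hypothetical test program, not part of Gromacs. */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size, sum = 0;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      /* One simple collective: every rank contributes its rank number,
       * so the result should be 0 + 1 + ... + (size-1) on every rank. */
      MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

      printf("rank %d of %d: allreduce sum = %d (expected %d)\n",
             rank, size, sum, size * (size - 1) / 2);

      MPI_Finalize();
      return 0;
  }

Compile it with OpenMPI's wrapper and launch it the same way you launch
mdrun_mpi, e.g. mpicc mpi_sanity.c -o mpi_sanity, then
mpirun -np 4 ./mpi_sanity.  If that runs cleanly many times in a row, the
MPI library itself is probably fine and you can look elsewhere.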

-Justin

> Alternatively, I could drop the external MPI for now and just use the new
> multi-threaded GROMACS defaults.  I was trying to prepare for longer runs
> on a cluster, however.  If those runs are going to crash, I had better
> know about it now.

-- 
========================================

Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

========================================


