[gmx-users] LAM OpenMPI conflict?

Francesco Pietra francesco.pietra at accademialucchese.it
Thu Oct 15 21:04:16 CEST 2009


On Thu, Oct 15, 2009 at 7:55 PM, Jussi Lehtola
<jussi.lehtola at helsinki.fi> wrote:
> On Thu, 2009-10-15 at 18:23 +0200, Francesco Pietra wrote:
>> Hi: I was trying to minimize a pure CG protein in vacuum on a
>> four-core machine (two dual-core Opterons), and got these errors:
>>
>> $ lamboot
>>
>> $ mpirun -np 4 mdrun -s mod21.tpr -o minim_mod21_traj.trr -c
>> minim_mod21.gro -e minim_mod21_ener.edr
>> ...................
>> ...................
>>
>> WARNING: Writing out atom name (SCSP1) longer than 4 characters to .pdb file
>>
>> mpirun noticed that job rank 2 with PID 4385 on node
>> tya64 exited on signal 11 (Segmentation fault).
>> 3 processes killed (possibly by Open MPI)
>
> This looks like you are trying to run the LAM binary with Open MPI's
> mpirun command. Use the LAM version instead (mpirun.lam in Debian).
>
> Furthermore, the Debian packages use suffixes, for instance the binaries
> in the gromacs-lam package are /usr/bin/mdrun_mpi.lam
> and /usr/bin/mdrun_mpi_d.lam, so you should switch mdrun to
> mdrun_mpi.lam. So all in all:
>
> $ mpirun.lam -np 4 mdrun_mpi.lam -s mod21.tpr -o minim_mod21_traj.trr -c
> minim_mod21.gro -e minim_mod21_ener.edr
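
As a quick sanity check on that diagnosis (my own guess, not something
from the docs), I suppose ldd would show which MPI library a given
mdrun binary is actually linked against, e.g.:

$ ldd /usr/bin/mdrun_mpi.lam | grep -i -E 'lam|mpi'

which should list LAM's libraries rather than Open MPI's, if the
Debian packaging is consistent. Anyway, here is what I got when I
followed the suggestion: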

$ lamboot

$ mpirun.lam -np 4 mdrun_mpi.lam -s mod21.tpr -o minim_mod21_traj.trr
-c minim_mod21.gro -e minim_mod21_ener.edr
NNODES=4, MYRANK=0, HOSTNAME=tya64
NNODES=4, MYRANK=1, HOSTNAME=tya64
NNODES=4, MYRANK=2, HOSTNAME=tya64
NNODES=4, MYRANK=3, HOSTNAME=tya64
NODEID=2 argc=9
NODEID=1 argc=9
NODEID=0 argc=9
NODEID=3 argc=9


.............
.............
Program mdrun_mpi.lam, VERSION 3.3.3
Source code file: ../../../../src/mdlib/init.c, line: 69

Fatal error:
run input file mod21.tpr was made for 1 nodes,
             while mdrun_mpi.lam expected it to be for 4 nodes.
-------------------------------------------------------

"Live for Liposuction" (Robbie Williams)

Error on node 0, will try to stop all the nodes
Halting parallel program mdrun_mpi.lam on CPU 0 out of 4

-----------------------------------------------------------------------------
One of the processes started by mpirun has exited with a nonzero exit
code.  This typically indicates that the process finished in error.
If your process did not finish in error, be sure to include a "return
0" or "exit(0)" in your C code before exiting the application.

PID 5634 failed on node n0 (127.0.0.1) with exit status 1.

===========
Actually, there is only one node, with four CPUs.
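
If that error means the .tpr itself was written for a single node, I
suppose I have to rerun grompp with -np 4 before launching; this is
just my reading of the 3.3.x grompp options, with placeholder .mdp and
.top names standing in for my actual input files:

$ grompp -np 4 -f minim.mdp -c mod21.gro -p mod21.top -o mod21.tpr
$ lamboot
$ mpirun.lam -np 4 mdrun_mpi.lam -s mod21.tpr -o minim_mod21_traj.trr \
    -c minim_mod21.gro -e minim_mod21_ener.edr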

When I installed the Debian-packaged GROMACS I avoided the Open MPI
version because of my Intel-compiled installation of Open MPI. On the
other hand, I need the latter for running Amber, so if I want to use
Open MPI I should probably compile GROMACS myself. I don't know
whether the Intel-compiled Open MPI would be accepted.

Somewhere in the GROMACS documentation I read that the command to
start a parallel job is the same for a LAM-based and an Open MPI-based
installation. Maybe I don't remember correctly.
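
What I had in mind was something like the following, where the
.openmpi suffixes are only my assumption about the Debian naming, by
analogy with the .lam ones:

With LAM/MPI (needs lamboot first):
$ lamboot
$ mpirun.lam -np 4 mdrun_mpi.lam -s mod21.tpr ...

With Open MPI (no lamboot step):
$ mpirun.openmpi -np 4 mdrun_mpi.openmpi -s mod21.tpr ...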

thanks
francesco

>
> What is also possible is that your installation of the Intel compiled
> Open MPI is visible in your environment, which may quite well lead into
> problems.
>
> (Also, LAM has been obsoleted by Open MPI years ago, so you might just
> try switching from LAM to Open MPI, then you wouldn't have to run
> lamboot at the beginning.)
> --
> ------------------------------------------------------
> Jussi Lehtola, FM, Tohtorikoulutettava
> Fysiikan laitos, Helsingin Yliopisto
> jussi.lehtola at helsinki.fi, p. 191 50632
> ------------------------------------------------------
> Mr. Jussi Lehtola, M. Sc., Doctoral Student
> Department of Physics, University of Helsinki, Finland
> jussi.lehtola at helsinki.fi
> ------------------------------------------------------
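
P.S. Regarding the Intel-compiled Open MPI being visible in my
environment: I suppose a quick check would be

$ which mpirun mpicc
$ echo $PATH
$ echo $LD_LIBRARY_PATH

and if those point at the Intel tree, that could indeed explain the
mixed-up libraries.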


