[gmx-users] problem with LAM

Arneh Babakhani ababakha at mccammon.ucsd.edu
Tue Jun 27 07:24:15 CEST 2006


Hi, I was having this same problem.  I tried running it like this:

/opt/mpich/intel/bin/mpirun -v -np $NPROC -machinefile \$TMPDIR/machines \
  ~/gromacs-mpi/bin/mdrun -np $NPROC -s $CONF -o $CONF -c After$CONF \
  -e $CONF -g $CONF >& $CONF.job

That is, mpirun with the -v option (I'm not exactly sure what it does, 
but it seemed to circumvent the problem).
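
(An aside, not part of the original exchange: as far as I know -v just makes 
mpirun verbose, so the more interesting point is that the mpirun above comes 
from an MPICH installation, while the error quoted below is from LAM/MPI. A 
GROMACS binary built against one MPI library generally cannot be launched 
with the other's mpirun. Assuming the binary paths used in this thread, a 
quick sanity check is:

# show which MPI library mdrun was linked against
ldd ~/gromacs-mpi/bin/mdrun | grep -i mpi

# show which mpirun is picked up first in the PATH
which mpirun

If ldd reports LAM's libraries but mpirun resolves to the MPICH one, or the 
other way around, the launcher and the binary need to be matched up.)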

But then I got another problem: GMX seems to start up OK, but then 
stalls.  I just posted to the group about this; you should have it in 
your inbox soon.

If this helps, and you do get GMX running in parallel, please let me know.

thanks,

Arneh

Sridhar Acharya wrote:
> Hi All,
>
> I am facing a problem with parallel mdrun.
> I tried to run on 2 nodes with the following command, but the program reports that lamd is not running, as follows.
> ####################################################################################################
> mpirun -np 2  /users/soft/GromacsSingle/bin/mdrun_mpi -s b4em_1CYP_WT.tpr -o em_1CYP_WT.trr -np 2
> -----------------------------------------------------------------------------
> It seems that there is no lamd running on this host, which indicates
> that the LAM/MPI runtime environment is not operating.  The LAM/MPI
> runtime environment is necessary for MPI programs to run (the MPI
> program tried to invoke the "MPI_Init" function).
>
> Please run the "lamboot" command to start the LAM/MPI runtime
> environment.  See the LAM/MPI documentation for how to invoke
> "lamboot" across multiple machines.
> -----------------------------------------------------------------------------
> -----------------------------------------------------------------------------
> It seems that [at least] one of the processes that was started with
> mpirun did not invoke MPI_INIT before quitting (it is possible that
> more than one process did not invoke MPI_INIT -- mpirun was only
> notified of the first one, which was on node n0).
>
> mpirun can *only* be used with MPI programs (i.e., programs that
> invoke MPI_INIT and MPI_FINALIZE).  You can use the "lamexec" program
> to run non-MPI programs over the lambooted nodes.
> -----------------------------------------------------------------------------
> ##################################################################################################
>
> But lamd is definitely running, because I can get the status of the LAM nodes with the "lamnodes" command.
> ########################################################################################
> [msridhar at cdfd-grid-node17 WT_SINGLE_PARALLEL]$ lamnodes
> n0      cdfd-grid-node2:1:
> n1      cdfd-grid-node4:1:
> n2      cdfd-grid-node12:1:
> n3      cdfd-grid-node13:1:
> n4      cdfd-grid-node14:1:
> n5      cdfd-grid-node16:1:
> n6      cdfd-grid-node17:1:origin,this_node
> ###########################################################################################
> Do I have to define any paths so that it can recognise this?
>
> Waiting for your suggestions.
>
> sridhar
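
Regarding the path question: one common reason for "no lamd running" even 
though lamnodes works is that the mpirun being invoked belongs to a 
different MPI (e.g. MPICH) than the LAM-compiled mdrun_mpi, or that lamboot 
was run in a different session or as a different user than the one launching 
mpirun. A rough sketch of what to check, assuming a standard LAM/MPI setup 
(the LAM install prefix below is hypothetical; adjust it to your system):

# 1. Confirm the mpirun found in the PATH is LAM's, and that LAM is visible
which mpirun
laminfo

# 2. If another MPI's mpirun shadows it, call LAM's mpirun by full path
/usr/local/lam/bin/mpirun -np 2 /users/soft/GromacsSingle/bin/mdrun_mpi \
    -s b4em_1CYP_WT.tpr -o em_1CYP_WT.trr -np 2

# 3. If LAM has to be (re)booted, use a boot schema listing the nodes
#    (host names taken from the lamnodes output above)
cat > lamhosts <<EOF
cdfd-grid-node2
cdfd-grid-node4
EOF
lamboot -v lamhosts
lamnodes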

