[gmx-users] Problem in mdrun_mpi using MPICH

David van der Spoel spoel at xray.bmc.uu.se
Wed Oct 19 17:13:06 CEST 2005


On Wed, 2005-10-19 at 20:35 +0530, Alok wrote:
> Hello Dr. David,
> Thanks for your kind reply. Can you guide me on how to compile it with
> MPICH? Previously I used the --enable-mpi flag for both FFTW and GROMACS 3.3.
it's on the website.
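
In short, something along these lines should work (a rough sketch, not the
full instructions on the site; the mpicc path below is taken from the
MPIR_HOME in your submission script, so adjust it if your MPICH lives
elsewhere):

  # use MPICH's compiler wrapper for both builds (csh syntax)
  setenv CC /opt/mpichdefault-1.2.6/bin/mpicc

  # FFTW 2.1.x: single precision with MPI support
  ./configure --enable-float --enable-type-prefix --enable-mpi
  make && make install

  # GROMACS 3.3: MPI-enabled, programs installed with the _mpi suffix
  ./configure --enable-mpi --program-suffix=_mpi
  make mdrun && make install-mdrun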


> Thanking you,
> Best Regards,
> Alok
> 
> David van der Spoel wrote:
> 
> >On Wed, 2005-10-19 at 18:55 +0530, Alok wrote:
> >  
> >
> >>Greeting to All,
> >>
> >>Sorry for the very silly question. I have very limited knowledge of
> >>parallel computing.
> >>
> >>I am trying to install the parallel version of GROMACS 3.3 on a Sun
> >>cluster. I compiled FFTW, binutils, and GROMACS 3.3 without any error.
> >>
> >>MPICH was already installed on the Sun cluster.
> >>    
> >>
> >
> >But you compiled it with LAM.
> >
> >That won't work. 
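> >
> >If you want to check which MPI a binary actually picked up, something
> >like this should show it (assuming the binary is dynamically linked and
> >ldd is available on your system):
> >
> >  ldd `which mdrun_mpi` | grep -i -e lam -e mpich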
> >
> >  
> >
> >>When I ran grompp_mpi for 2 nodes, it ran fine and generated the *.tpr
> >>file for two processors.
> >>
> >>But when I tried mdrun_mpi -h, it gave the following message.
> >>
> >>-----------------------------------------------------------------------------
> >>It seems that there is no lamd running on this host, which indicates
> >>that the LAM/MPI runtime environment is not operating.  The LAM/MPI
> >>runtime environment is necessary for MPI programs to run (the MPI
> >>program tried to invoke the "MPI_Init" function).
> >>
> >>Please run the "lamboot" command to start the LAM/MPI runtime
> >>environment.  See the LAM/MPI documentation for how to invoke
> >>"lamboot" across multiple machines.
> >>-----------------------------------------------------------------------------
> >>
> >>
> >>
> >>Due to the restrictions of our computer center rules, I cannot use the
> >>"lamboot" command. I have to submit the job using the following script
> >>(provided by our system administrator):
> >>
> >>
> >>
> >>
> >>---------------------------------------------------------------------------
> >>
> >>#!/bin/csh -f
> >>#
> >>#
> >># (c) 2004 Sun Microsystems, Inc. Use is subject to license terms.
> >>
> >># ---------------------------
> >># our name
> >>#$ -N MPI_Job
> >>#
> >># pe request
> >>#$ -pe mpich* 2-20
> >>#
> >># MPIR_HOME from submitting environment
> >>#$ -v MPIR_HOME=/opt/mpichdefault-1.2.6
> >># ---------------------------
> >>
> >>#
> >># needs in
> >>#   $NSLOTS
> >>#       the number of tasks to be used
> >>#   $TMPDIR/machines
> >>#       a valid machine file to be passed to mpirun
> >>
> >>echo "Got $NSLOTS slots."
> >>
> >># enables $TMPDIR/rsh to catch rsh calls if available
> >>set path=($TMPDIR $path)
> >>
> >>$MPIR_HOME/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines -nolocal *
> >>
> >>---------------------------------------------------------------------------
> >>
> >>
> >>and start the job using the following command:
> >>qsub -pe mpichpar <no of nodes> -q par.q <script_name>
> >>
> >>where * is the path of the input file, including the parallel program name
> >>(in my case it is /users/alokjain/mdrun_mpi -s full.tpr -o full -x full -c
> >>after_full -e full -g full).
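> >>
> >>So in my case the last line of the script becomes, for example:
> >>
> >>$MPIR_HOME/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines -nolocal \
> >>    /users/alokjain/mdrun_mpi -s full.tpr -o full -x full -c after_full -e full -g full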
> >>
> >>But it is not recognizing the mdrun_mpi command (I have already set the path).
> >>
> >>Can someone help me to overcome this problem?
> >>Is there some problem in the installation or in the script?
> >>
> >>Thanking you,
> >>Best regards,
> >>Alok Jain
> >>
-- 
David.
________________________________________________________________________
David van der Spoel, PhD, Assoc. Prof., Molecular Biophysics group,
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596,          75124 Uppsala, Sweden
phone:  46 18 471 4205          fax: 46 18 511 755
spoel at xray.bmc.uu.se    spoel at gromacs.org   http://xray.bmc.uu.se/~spoel
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++




