[gmx-developers] Problem submitting GROMACS script
rainy908 at yahoo.com
Sat Aug 1 00:08:43 CEST 2009
Justin,
You are correct about my attempt to run grompp under mpirun - it doesn't
work. I also realized that I was using a version of GROMACS that wasn't
compiled for MPI; gromacs-3.3.1-dev is compiled for MPI, however. The
submission script that worked is as follows:
# Define locations of MPIRUN, MDRUN
MPIRUN=/usr/local/topspin/mpi/mpich/bin/mpirun
MDRUN=/share/apps/gromacs-3.3.1-dev/bin/mdrun
cd /nas2/lpeng/nexil/gromacs/cg_setup
# Run MD
$MDRUN -v -nice 0 -np $NSLOTS -s md3.tpr -o md3.trr -c confout.gro \
       -g md3.log -x md3.xtc
This was carried out after I ran grompp separately on a single node.
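For the record, here is a rough sketch of the combined script I plan to use from now on, along the lines of your suggestion below: grompp run serially, then mdrun launched through mpirun. The paths and the $NSLOTS/$TMPDIR variables are just the ones from my SGE setup; the ldd line is only a quick sanity check that the mdrun binary is actually linked against MPI, not anything required by GROMACS.

# Define locations of MPIRUN, MDRUN, and grompp
MPIRUN=/usr/local/topspin/mpi/mpich/bin/mpirun
MDRUN=/share/apps/gromacs-3.3.1-dev/bin/mdrun
GROMPP=/share/apps/gromacs-3.3.1/bin/grompp

cd /nas2/lpeng/nexil/gromacs/cg_setup

# Sanity check: an MPI-enabled mdrun should list an MPI library here
ldd $MDRUN | grep -i mpi

# Preprocess serially - grompp is not a parallel program, so no mpirun
$GROMPP -f md.mdp -c confout.gro -o md3.tpr -n index.ndx

# Run MD in parallel on the slots SGE allocated
$MPIRUN -machinefile $TMPDIR/machines -np $NSLOTS \
    $MDRUN -v -nice 0 -np $NSLOTS -s md3.tpr -o md3.trr \
           -c confout.gro -g md3.log -x md3.xtc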
2009/7/31 Justin A. Lemkul <jalemkul at vt.edu>
>
>
> rainy908 at yahoo.com wrote:
>
>> Hi,
>>
>> I'm trying to carry out an MD simulation on 4 processors but am having
>> trouble with the MPI ("Can't read MPIRUN_HOST"). My error is:
>>
>> -catch_rsh
>> /opt/gridengine/default/spool/compute-0-32/active_jobs/176501.1/pe_hostfile
>> compute-0-32
>> compute-0-32
>> compute-0-32
>> compute-0-32
>> Got 4 slots.
>> compute-0-32
>> compute-0-32
>> compute-0-32
>> compute-0-32
>> OSU MVAPICH VERSION 0.9.5-SingleRail
>> Can't read MPIRUN_HOST
>> # Cleaning local host (compute-0-32) for lpeng (lpeng)... done.
>>
>>
>> My submission script is:
>>
>> # Define locations of MPIRUN, MDRUN, and grompp
>> MPIRUN=/usr/local/topspin/mpi/mpich/bin/mpirun
>> MDRUN=/share/apps/gromacs-3.3.1/bin/mdrun-mpi
>> grompp=/share/apps/gromacs-3.3.1/bin/grompp
>>
>> cd /nas2/lpeng/nexil/gromacs/cg_setup
>>
>> # Running Gromacs: read TPR and write output to /nas2 disk
>> $MPIRUN -v -machinefile $TMPDIR/machines -np $NSLOTS $grompp -f md.mdp \
>>         -c confout.gro -o md3.tpr -n index.ndx
>>
>> # Run MD
>> $MDRUN -v -nice -np $NSLOTS -s md3.tpr -o md3.trr -c confout.gro \
>>        -g md3.log -x md3.xtc
>>
>>
> The problem could be that you're trying to run grompp using mpirun (which
> you shouldn't), and then trying to run a parallel job with mdrun without
> invoking mpirun.
>
> Probably something like:
>
> $grompp -f md.mdp -c confout.gro -o md3.tpr -n index.ndx
>
> $MPIRUN -v -machinefile $TMPDIR/machines -np $NSLOTS \
>     $MDRUN -v -nice -np $NSLOTS -s md3.tpr -o md3.trr -c confout.gro \
>            -g md3.log -x md3.xtc
>
> is more appropriate. Without knowing the specifics of your system, a guess
> is about all I can give.
>
> -Justin
>
>
>> Does anyone have any insight on what the problem could be? Is the problem
>> with the MPI or the submission script?
>>
>> Thanks,
>> Lili
>>
>>
>
> --
> ========================================
>
> Justin A. Lemkul
> Ph.D. Candidate
> ICTAS Doctoral Scholar
> Department of Biochemistry
> Virginia Tech
> Blacksburg, VA
> jalemkul[at]vt.edu | (540) 231-9080
> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
>
> ========================================
>