[gmx-users] how to run parallel jobs in gromacs
Diego Nolasco
nolasco1980 at gmail.com
Tue Aug 7 14:39:24 CEST 2007
Did you boot your LAM?
Go to a specific node of your cluster, for example node 3, and create a file
with a simple name like lamhosts. This file should contain just the nodes you
want to boot (the name of each node, like no3, or something like that).
Then run
lamboot lamhosts -v
If you have two processors in each node, use -np 2; if you have four, use
-np 4.
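
For example, a minimal lamhosts boot schema for two dual-processor nodes might
look like the following (the node names no3 and no4 and the cpu counts are only
placeholders; adapt them to your cluster):

  no3 cpu=2
  no4 cpu=2

Once lamboot reports success you can launch the parallel mdrun on all four
processors, e.g.:

  lamboot lamhosts -v
  mpirun -np 4 /usr/local/gromacs/bin/mdrun_mpi -v -deffnm em

A corrected pbs_submit sketch that folds in Mark's suggestions from the quoted
thread is appended after the quoted messages below.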
2007/8/6, Anupam Nath Jha <anupam at mbu.iisc.ernet.in>:
>
>
> Now I am using only this much:
>
> #This is a PBS script for Parallel Jobs
> #!/bin/bash -f
> #PBS -o /home/anupam/MD/lyd
> #PBS -e /home/anupam/MD/lyd
> #PBS -q default
>
> cd /home/anupam/MD/lyd
>
> mpirun -np 4 /usr/local/gromacs/bin/mdrun_mpi -v -deffnm em
>
> but still it's giving this error:
> totalnum=2 numhosts=1
> there are not enough hosts on which to start all processes
>
> > Anupam Nath Jha wrote:
> >> Dear all
> >>
> >> I am new to GROMACS. I have installed gromacs-3.3.3 on our cluster (MPI is
> >> already there) as a parallel version, using the following commands:
> >>
> >>
> >> make clean
> >> ./configure --enable-mpi --disable-nice --program-suffix="_mpi"
> >> make mdrun
> >> make install-mdrun
> >>
> >> It went fine.
> >>
> >> But when I ran the command
> >> qsub pbs_submit
> >>
> >> with this pbs_submit file:
> >>
> >> #This is a PBS script for Parallel Jobs
> >> #!/bin/bash -f
> >> #PBS -l nodes=2:ppn=2
> >> #PBS -o /home/anupam/MD/lyd
> >> #PBS -e /home/anupam/MD/lyd
> >
> > These latter two should probably be file names, not directory names...
> >
> >> #PBS -q default
> >>
> >> cd /home/anupam/MD/lyd
> >
> > ... since this seems to be a directory? Check out the #PBS -wd option too...
> >
> >> mpirun grompp -f em.mdp -p topol.top -c solvated.gro -np 4 -o em.tpr
> >
> > grompp is not an MPI code; run it without mpirun
> >
> >> mpirun mdrun_mpi -v -deffnm -np 4 em
> >
> > Unless you have a PBS-customized install of mpirun, you should use
> > "mpirun -np 4 mdrun_mpi -v -deffnm em", since mdrun_mpi will pick up the
> > four processors from the environment, but mpirun won't.
> >
> >> but it's not doing anything, except writing this:
> >>
> >> totalnum=3 numhosts=2
> >> there are not enough hosts on which to start all processes
> >> totalnum=3 numhosts=2
> >> there are not enough hosts on which to start all processes
> >
> > First get grompp working interactively, then get grompp working in a
> > script (neither of which needs MPI), and only then worry about
> > mpirun -np 4 mdrun.
> >
> > Mark
>
>
> --
> Science is facts; just as houses are made of stone, so is science made of
> facts; but a pile of stones is not a house, and a collection of facts is not
> necessarily science.
>
> Anupam Nath Jha
> Ph. D. Student
> Saraswathi Vishveshwara Lab
> Molecular Biophysics Unit
> IISc,Bangalore-560012
> Karnataka
> Ph. no.-22932611
>
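
Putting the pieces together, a pbs_submit along the following lines should work.
This is only a sketch based on Mark's suggestions in the quoted thread; the node
counts and paths are the ones from your mail, and the output/error file names
em.out and em.err are just examples:

  #!/bin/bash -f
  #This is a PBS script for Parallel Jobs
  #PBS -l nodes=2:ppn=2
  #PBS -o /home/anupam/MD/lyd/em.out
  #PBS -e /home/anupam/MD/lyd/em.err
  #PBS -q default

  cd /home/anupam/MD/lyd

  # grompp is serial, so run it without mpirun (or run it beforehand,
  # interactively); -np 4 here only prepares the tpr for a four-process run
  grompp -f em.mdp -p topol.top -c solvated.gro -np 4 -o em.tpr

  # give -np to mpirun, not to mdrun_mpi
  mpirun -np 4 /usr/local/gromacs/bin/mdrun_mpi -v -deffnm em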