[gmx-users] Node assignment using openmpi for multiple simulations in the same submission script in PBS
himanshu khandelia
hkhandelia at gmail.com
Fri Nov 2 15:49:11 CET 2007
Hi Mark,
I do not want to request two separate 1-node jobs, because the MAUI
scheduler on our local cluster sometimes favors jobs that request more
resources, so a single 2-node job should get through the queue faster.
I posted on this list because someone here might have faced a similar
problem before, and because how Open MPI allocates cpus can also depend
on the options GROMACS was originally compiled with (in connection with
MPI). So it's not strictly a 100% Open MPI question.
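In the meantime, the workaround I plan to try is to split $PBS_NODEFILE
into one hostfile per simulation, so each mpirun is pinned to the cpus
of a single node. A minimal sketch, untested, assuming Open MPI's
--hostfile option is honored inside a PBS job and that $PBS_NODEFILE
lists one line per allocated cpu:

########
# $PBS_NODEFILE has one line per allocated cpu; sorting groups the
# lines by node, so the first 4 lines belong to one node and the
# last 4 lines to the other (assumes 2 nodes x 4 cpus).
sort $PBS_NODEFILE | head -4 > hosts1
sort $PBS_NODEFILE | tail -4 > hosts2

# Pin each simulation to one node via its own hostfile
# (hosts1/hosts2 are just names chosen here).
mpirun -np 4 --hostfile hosts1 \
    /people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi -np 4 \
    -v -s $mol.tpr -o $mol.trr -c $mol.gro -e $mol -g $mol.log >& $mol.out &
mpirun -np 4 --hostfile hosts2 \
    /people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi -np 4 \
    -v -s $mol2.tpr -o $mol2.trr -c $mol2.gro -e $mol2 -g $mol2.log >& $mol2.out &

wait
########

One caveat: if this Open MPI was built with PBS (tm) support, mpirun may
take its node list from PBS directly and ignore the hostfiles, so this
needs to be verified on our cluster.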
Thanks for the help
-Himanshu
On Nov 2, 2007 3:07 PM, Mark Abraham <Mark.Abraham at anu.edu.au> wrote:
> himanshu khandelia wrote:
> > Hi,
> >
> > I am requesting 2 4-cpu nodes on a cluster using PBS, and I want to
> > run a separate GMX simulation on each 4-cpu node. However, on the 2
> > nodes, each simulation runs 50-100% slower than the same simulation
> > does in a job which requests only one node. I am guessing this is
> > because Open MPI fails to assign all the cpus of one node to a single
> > simulation, so cpus from different nodes are being used to run each
> > simulation. This is what I have in the PBS script:
>
> At the risk of pointing out the obvious, why not use 2 independent
> requests for single 4-cpu nodes and run the jobs independently, or run
> the two jobs as two successive 8-cpu jobs?
>
>
> > 1.
> > ########
> > mpirun -np 4 /people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi -np 4 \
> >     -v -s $mol.tpr -o $mol.trr -c $mol.gro -e $mol -g $mol.log >& $mol.out &
> > mpirun -np 4 /people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi -np 4 \
> >     -v -s $mol2.tpr -o $mol2.trr -c $mol2.gro -e $mol2 -g $mol2.log >& $mol2.out &
> >
> > wait
> > ########
> >
> > Open MPI does have a mechanism whereby one can assign specific
> > processes to specific nodes:
> > http://www.open-mpi.org/faq/?category=running#mpirun-scheduling
> > So I have also tried both of the following in the PBS script, using
> > the --bynode or the --byslot option:
> >
> > 2.
> > ########
> > mpirun -np 4 --bynode /people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi etc. &
> > mpirun -np 4 --bynode /people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi etc. &
> > wait
> > ########
> >
> > 3.
> > ########
> > mpirun -np 4 --byslot /people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi etc. &
> > mpirun -np 4 --byslot /people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi etc. &
> > wait
> > ########
> >
> > But both of these methods result in similar performance losses. As
> > far as I can tell, that is what the FAQ predicts: each mpirun still
> > sees both nodes, so --byslot lets both jobs fill up the first node's
> > slots, while --bynode spreads each job's ranks across both nodes.
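> >
> > (A quick way to check where the ranks actually land, assuming a
> > shared filesystem, is to launch hostname with the same options as
> > mdrun_mpi:
> >
> > ########
> > # Record which node each of the 4 ranks runs on
> > # (the output file names here are arbitrary).
> > mpirun -np 4 --bynode hostname >& placement-bynode.out
> > mpirun -np 4 --byslot hostname >& placement-byslot.out
> > ########
> >
> > If a file lists more than one node name, that mpirun is splitting its
> > ranks across nodes.)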
> >
> > So how does one assign the cpus properly with mpirun when running
> > different simulations in the same PBS job?
>
> This problem has got nothing to do with GROMACS, so the people you're
> asking here might well have no idea. You should probably be asking an
> Open MPI mailing list, since your problem is "how do I allocate these
> 4-processor jobs to specific subsets of 4 processors", not "how do I
> make GROMACS jump through hoops" :-)
>
> Mark