[gmx-users] Node assignment using openmpi for multiple simulations in the same submission script in PBS

Mark Abraham Mark.Abraham at anu.edu.au
Fri Nov 2 15:07:36 CET 2007


himanshu khandelia wrote:
> Hi,
> 
> I am requesting two 4-cpu nodes on a cluster using PBS, and I want to
> run a separate GROMACS simulation on each 4-cpu node. However, each
> simulation runs 50-100% slower than the same simulation run in a job
> that requests only one node. I am guessing this is because Open MPI
> fails to assign all the cpus of one node to a single simulation, and
> instead spreads each simulation over cpus from both nodes. This is
> what I have in the PBS script:

At the risk of pointing out the obvious, why not submit two independent 
requests for single 4-cpu nodes and run the two jobs independently, or 
run them as two successive 8-cpu jobs?
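
For example, a minimal single-node submission script might look like the 
sketch below (illustrative only; the exact PBS directives depend on your 
site's setup, and $mol is set to the run name as in your script):

########
#!/bin/sh
#PBS -l nodes=1:ppn=4

cd $PBS_O_WORKDIR

# one simulation gets all 4 cpus of the single allocated node
mpirun -np 4 /people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi -np 4 \
    -v -s $mol.tpr -o $mol.trr -c $mol.gro -e $mol -g $mol.log > $mol.out 2>&1
########

Submitted once per simulation, the scheduler then keeps each run on its 
own node and no mpirun placement tricks are needed.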

> 1.
> ########
> mpirun -np 4 /people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi -np 4 -v -s $mol.tpr -o $mol.trr -c $mol.gro -e $mol -g $mol.log >& $mol.out &
> mpirun -np 4 /people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi -np 4 -v -s $mol2.tpr -o $mol2.trr -c $mol2.gro -e $mol2 -g $mol2.log >& $mol2.out &
> 
> wait
> ########
> 
> Open MPI does have a mechanism for assigning specific processes to
> specific nodes:
> http://www.open-mpi.org/faq/?category=running#mpirun-scheduling
> So I have also tried both of the following in the PBS script, using
> the --bynode or the --byslot option:
> 
> 2.
> ########
> mpirun -np 4 --bynode /people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi etc. &
> mpirun -np 4 --bynode /people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi etc. &
> wait
> ########
> 
> 3.
> ########
> mpirun -np 4  --byslot /people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi etc. &
> mpirun -np 4  --byslot /people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi etc. &
> wait
> ########
> 
> But these methods also result in similar performance losses.
> 
> So how does one assign the cpus properly with mpirun when running
> different simulations in the same PBS job?

This problem has nothing to do with GROMACS, so the people you're 
asking here may well have no idea. You should probably ask on an Open 
MPI mailing list, since your problem is "how do I allocate these 
4-processor jobs to specific subsets of 4 processors", not "how do I 
make GROMACS jump through hoops" :-)

Mark


