[gmx-users] Node assignment using openmpi for multiple simulations in the same submission script in PBS
himanshu khandelia
hkhandelia at gmail.com
Fri Nov 2 11:24:46 CET 2007
Hi,
I am requesting two 4-cpu nodes on a cluster using PBS, and I want to run a
separate GMX simulation on each 4-cpu node. However, on 2 nodes each
simulation runs 50 to 100% slower than the same simulation in a job
that requests only one node. I am guessing this is because Open MPI
fails to assign all CPUs of the same node to one simulation; instead,
CPUs from different nodes are being used for each run. This is what I
have in the PBS script:
1.
########
mpirun -np 4 /people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi -np 4 \
  -v -s $mol.tpr -o $mol.trr -c $mol.gro -e $mol -g $mol.log >& $mol.out &
mpirun -np 4 /people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi -np 4 \
  -v -s $mol2.tpr -o $mol2.trr -c $mol2.gro -e $mol2 -g $mol2.log >& $mol2.out &
wait
########
Open MPI does have a mechanism for assigning specific processes to
specific nodes:
http://www.open-mpi.org/faq/?category=running#mpirun-scheduling
So I have also tried both of the following in the PBS script, using
the --bynode or the --byslot option:
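That FAQ also mentions hostfiles. One thing I have not tried is giving each
mpirun its own hostfile carved out of $PBS_NODEFILE, so each job step is
confined to a single node. A sketch of what I mean (the node names here are
made up for illustration, and the mdrun flags would be the same as in my
script above):

```shell
# Illustrative only: fake a $PBS_NODEFILE listing two 4-cpu nodes,
# one line per slot, the way PBS/Torque normally writes it.
PBS_NODEFILE=$(mktemp)
printf 'node1\nnode1\nnode1\nnode1\nnode2\nnode2\nnode2\nnode2\n' > "$PBS_NODEFILE"

# Split it so each simulation gets one node's four slots.
head -n 4 "$PBS_NODEFILE" > hosts1   # four "node1" lines
tail -n 4 "$PBS_NODEFILE" > hosts2   # four "node2" lines

# Each mpirun would then be restricted to its own node, e.g.:
#   mpirun -np 4 --hostfile hosts1 .../mdrun_mpi -np 4 -s $mol.tpr  ... &
#   mpirun -np 4 --hostfile hosts2 .../mdrun_mpi -np 4 -s $mol2.tpr ... &
#   wait
```

Whether this plays well with how PBS launches Open MPI jobs here, I don't
know.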
2.
########
mpirun -np 4 --bynode
/people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi etc. &
mpirun -np 4 --bynode
/people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi etc. &
wait
########
3.
########
mpirun -np 4 --byslot /people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi etc. &
mpirun -np 4 --byslot /people/disk2/hkhandel/gromacs-3.3.1/bin/mdrun_mpi etc. &
wait
########
But these methods result in similar performance losses.
So how does one assign the CPUs properly using mpirun when running
different simulations in the same PBS job?
Thank you for the help,
-Himanshu