[gmx-users] Attempting to scale gromacs mdrun_mpi
NG HUI WEN
HuiWen.Ng at nottingham.edu.my
Mon Aug 23 17:15:55 CEST 2010
I have been playing with the "mdrun_mpi" command in gromacs 4.0.7 to try out parallel processing. Unfortunately, the results I got did not show any significant improvement in simulation time.
Below is the command I issued:
mpirun -np x mdrun_mpi -deffnm
where x is the number of processors being used.
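For concreteness, a minimal sketch of the invocation (here "topol" is only a placeholder for the -deffnm prefix, and 8 ranks are an arbitrary example):

```shell
# Launch 8 MPI ranks of the MPI-enabled mdrun binary.
# -deffnm sets the default filename prefix, so mdrun_mpi reads
# topol.tpr and writes topol.log, topol.trr, topol.edr, etc.
mpirun -np 8 mdrun_mpi -deffnm topol
```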
From the machine output, it seemed that the work had indeed been distributed across multiple processors, e.g. with -np 10:
NNODES=10, MYRANK=5, HOSTNAME=beowulf
NNODES=10, MYRANK=1, HOSTNAME=beowulf
NNODES=10, MYRANK=2, HOSTNAME=beowulf
NNODES=10, MYRANK=3, HOSTNAME=beowulf
NNODES=10, MYRANK=7, HOSTNAME=beowulf
NNODES=10, MYRANK=8, HOSTNAME=beowulf
Making 2D domain decomposition 5 x 1 x 2
starting mdrun 'PROTEIN'
1000 steps, 2.0 ps.
The simulation system consists of 100581 atoms and the run is 2 ps long (1000 steps). The results obtained are as follows:
number of CPUs Simulation time
A significant improvement in simulation time was only observed going from -np 1 to -np 2. Since almost all runs (except -np 1) complained about load imbalance and PP:PME imbalance (the latter especially at larger -np values), I tried to increase the number of PME nodes by adding the -npme flag with a larger value, but the results either showed no improvement or got worse.
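As a sketch of what I tried (the values are illustrative: with 10 ranks, the GROMACS manual's rule of thumb of dedicating roughly a quarter to a third of the ranks to PME would suggest something like -npme 3, and "topol" is again a placeholder -deffnm prefix):

```shell
# Dedicate 3 of the 10 ranks to PME (long-range electrostatics),
# leaving 7 ranks for particle-particle (PP) work.
# -npme 3 is an example value, not a tuned one.
mpirun -np 10 mdrun_mpi -deffnm topol -npme 3
```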
As I am new to gromacs, there may be things that I have missed or done incorrectly. I would really appreciate some input on this. Many thanks in advance!!