[gmx-users] mdrun vs mdrun_mpi

Mark Abraham mark.abraham at anu.edu.au
Sun Jul 9 17:21:09 CEST 2006

>     I am puzzled by the exact differences between MPI runs of gromacs
> using the mdrun and mdrun_mpi binaries.

One will run calculations in parallel, the other will not :-)

> For example, I can use...
> grompp -np 1 ...
> mpirun n0 -c 2 mdrun ...
> To run a MD simulation on a single node with two processors. Both
> cpus appear to be fully utilized.

This is like telling two employees each to do the same piece of work, but
not telling either of the other's existence. Both appear to be working in
the normal manner, but all you've done is waste the time of one of them.
To achieve a speed-up, you need to tell each that the other exists and
that they should cooperate.

> What I don't understand is the
> difference between the mdrun and mdrun_mpi binaries. I was originally
> under the impression that mdrun_mpi might eliminate the need to use
> mpirun to start the MPI md calculations. However I have never been
> able to get that to work. So what exactly would...
> mpirun n0 -c 2 mdrun_mpi...
> do differently from...
> mpirun n0 -c 2 mdrun
> ...since the latter seems to be fully utilizing both processors
> already.

In the former, the employees in my analogy have been told of each other and
can talk to each other to cooperate. Unless the job is dominated by setup
time, you should notice the former takes much less time than the latter,
depending on the scaling properties of the particular algorithm. You may
also notice both an output.log file and a backup #output.log.1# file in the
latter case, but not the former. The reason for this should be clear: each
of the two independent mdrun processes tries to write the same output.log,
so the second backs up the first's copy. :-)
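To make the contrast concrete, here is a sketch of the two invocations. This assumes a GROMACS 3.x setup where grompp splits the system across processors at preprocessing time, an MPI-enabled binary named mdrun_mpi, and a LAM/MPI-style mpirun; the node/processor flags (n0 -c 2) are taken from the original question and may differ for your MPI installation, and the file names are placeholders.

```shell
# Preprocess for 2 processes: in GROMACS 3.x the -np given to grompp
# must match the number of MPI processes mdrun will be started with.
grompp -np 2 -f md.mdp -c conf.gro -p topol.top -o topol.tpr

# Parallel run: two cooperating MPI ranks share the work of ONE simulation.
mpirun n0 -c 2 mdrun_mpi -s topol.tpr

# By contrast, this starts two *independent* serial mdrun processes that
# each perform the whole simulation; the second one backs up the first's
# log file to #md.log.1#, which is the symptom described above.
# mpirun n0 -c 2 mdrun -s topol.tpr
```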
