[gmx-users] Gromacs 4 with mpi interface

Jussi Lehtola jussi.lehtola at helsinki.fi
Thu Dec 18 17:16:12 CET 2008


On Thu, 2008-12-18 at 21:23 +0530, Manik Mayur wrote:
> I am not sure that mdrun_mpi or for that matter mdrun with options -np
> 2 -multi 1 (I have a core 2 duo machine) is actually running the
> process parallely, as the estimated time for completion for a
> simulation is the same as without.

With GROMACS 4 you no longer need to supply the -np option to mdrun or
grompp; just run with

mpirun -np 2 g_mdrun_mpi (options)
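For example, a complete two-core run might look like the following
sketch (the file names are placeholders, and the MPI binary may be
called mdrun_mpi or g_mdrun_mpi depending on how your distribution
packages it):

```shell
# Preprocess the input as usual; GROMACS 4 no longer takes -np here.
# (grompp.mdp, conf.gro and topol.top are placeholder file names.)
grompp -f grompp.mdp -c conf.gro -p topol.top -o topol.tpr

# Let mpirun start two MPI ranks of the MPI-enabled mdrun locally:
mpirun -np 2 mdrun_mpi -s topol.tpr -deffnm md
```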

> I have built open-mpi library, so do I have to make some changes at
> its configuration level(even if I am using single machine with
> multiple core processor)? Like adding localhost and the no. of nodes.
> If yes, then can anybody help me with that.

Are you using a compiler other than gcc? If not, there's no need to
build anything yourself; just use the versions provided by your
distribution.

If you're running on a single node, you don't have to configure
anything; just launch the MPI-enabled mdrun binary with mpirun.

> Also, for the information, mdrum_mpi or mdrun with relevant options
> shows NNODES=1. why?

Because if you don't launch it with mpirun -np, the binary runs on only
one core.

And if you run a non-MPI binary with mpirun -np, you end up starting
multiple independent copies of the simulation instead of one
distributed run.
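To confirm that a run really is parallel, one can check the node count
mdrun reports near the top of its log file (a sketch; md.log is a
placeholder for whatever log name your run produced):

```shell
# Print the first NNODES line from the run's log file.
# A correctly launched two-rank MPI run should report NNODES=2.
grep -m1 "NNODES" md.log
```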
-- 
------------------------------------------------------
Mr. Jussi Lehtola, M. Sc., Doctoral Student
Department of Physics, University of Helsinki, Finland
jussi.lehtola at helsinki.fi, tel. 191 50632
------------------------------------------------------
