[gmx-users] domain decomposition and load balancing
Mark Abraham
mark.abraham at anu.edu.au
Fri Feb 19 23:27:31 CET 2010
----- Original Message -----
From: Amit Choubey <kgp.amit at gmail.com>
Date: Saturday, February 20, 2010 8:51
Subject: [gmx-users] domain decomposition and load balancing
To: Discussion list for GROMACS users <gmx-users at gromacs.org>
> Hi Everyone,
> I am trying to run a simulation with the option "pbc=xy" turned on. I am using 64 processors for the simulation. mdrun_mpi gives the following error message before starting the MD steps:
>
> There is no domain decomposition for 64 nodes that is compatible with the given box and a minimum cell size of 0.889862 nm
> Change the number of nodes or mdrun option -rdd or -dds
> Look in the log file for details on the domain decomposition
> This has to do with the load balancing in the domain decomposition version of mdrun. Can anyone suggest how to set the options -rdd or -dds?
Those options are not normally the problem - but see the log file for info and mdrun -h for instructions.
You should read up on domain decomposition in the manual, and choose npme such that 64 - npme is suitably composite, so that you can make a reasonably compact 3D grid and the minimum cell size is not a constraint. Cells have to be large enough that all nonbonded interactions can be resolved by consulting at most nearest-neighbour cells (plus some other constraints).
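To make the npme arithmetic concrete, here is a minimal Python sketch (my own illustration, not GROMACS code; only the total of 64 ranks comes from your setup, and the range of npme values is arbitrary). It lists how many particle-particle (PP) ranks each npme choice leaves and the most compact 3D grid they factor into, where "compact" just means the smallest spread between the largest and smallest grid dimension:

TOTAL_RANKS = 64  # total MPI ranks, as in your run

def factorizations(npp):
    """All (nx, ny, nz) with nx * ny * nz == npp."""
    for nx in range(1, npp + 1):
        if npp % nx:
            continue
        rest = npp // nx
        for ny in range(1, rest + 1):
            if rest % ny:
                continue
            yield (nx, ny, rest // ny)

def most_compact(npp):
    """Factorization with the smallest spread between dimensions."""
    return min(factorizations(npp), key=lambda g: max(g) - min(g))

for npme in range(16, 29):            # arbitrary range of npme to inspect
    npp = TOTAL_RANKS - npme
    print(f"npme={npme:2d}  npp={npp:2d}  grid={most_compact(npp)}")

For npp=45 this picks 3x3x5 and for npp=36 it picks 3x3x4, which is the kind of comparison I make below.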
I'm assuming pbc=xy requires a 2D DD. For example, npme=19 gives npp=45, which decomposes as 9x5x1, but npme=28 gives npp=36, which decomposes as 6x6x1 and allows the cells to have the smallest diameter possible. Of course, if your simulation box is so small that the 2D DD for pbc=xy will always lead to slabs that are too small in one dimension, then you can't solve this problem with DD.
If pbc=xy permits a 3D DD, then the same considerations apply: npme=19 gives 5x3x3, but npme=28 allows 4x3x3.
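Here is the cell-size side of the same argument, again only as a rough sketch: the 0.889862 nm minimum comes from your error message, but the box dimensions below are made up, so substitute your real box vectors from the .gro or log file. mdrun's actual test is more involved (dynamic load balancing margins, bonded and constraint communication ranges), so this only shows the basic geometry:

MIN_CELL = 0.889862        # nm, minimum cell size from the error message
box = (13.0, 13.0, 6.0)    # nm, hypothetical rectangular box -- use your own

def cell_sizes(grid, box):
    """Edge lengths of one DD cell for a given grid and rectangular box."""
    return tuple(b / n for b, n in zip(box, grid))

for grid in [(9, 5, 1), (6, 6, 1), (5, 3, 3), (4, 3, 3)]:
    sizes = cell_sizes(grid, box)
    ok = all(s >= MIN_CELL for s in sizes)
    print(grid, tuple(round(s, 2) for s in sizes), "ok" if ok else "cell too small")

If the box is thin in z, you will see immediately why only a small nz (or nz=1) can satisfy the minimum cell size there.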
> Also, the simulation runs fine on one node (with domain decomposition) and with particle decomposition, but both of them are extremely slow.
Well, that's normal...
Mark