[gmx-users] no domain decomposition

Justin Lemkul jalemkul at vt.edu
Wed Jun 25 13:30:24 CEST 2014

On 6/25/14, 6:25 AM, Chetan Mahajan wrote:
> Hello,
> I have 10910 atoms (2160 of TiO2, 5 of sodium and formate, the rest water)
> in my system. I am getting the following error:
> Fatal error:
> There is no domain decomposition for 16 nodes that is compatible with the
> given box and a minimum cell size of 1.81239 nm
> Change the number of nodes or mdrun option -rdd
> The following is an excerpt of the log file:
> Initializing Domain Decomposition on 16 nodes
> Dynamic load balancing: no
> Will sort the charge groups at every domain (re)decomposition
> Initial maximum inter charge-group distances:
>      two-body bonded interactions: 1.648 nm, Bond, atoms 2161 2163
>    multi-body bonded interactions: 1.648 nm, Angle, atoms 2161 2163
> Minimum cell size due to bonded interactions: 1.812 nm
> Using 0 separate PME nodes, as there are too few total
>   nodes for efficient splitting
> Optimizing the DD grid for 16 cells with a minimum initial size of 1.812 nm
> The maximum allowed number of cells is: X 1 Y 1 Z 6
> I tried decreasing the number of nodes from 48 to 16, but this error
> always appears. Atoms 2161 and 2163 mentioned in the log excerpt above are
> formate atoms, each of which is in a different charge group. I can't put
> them in the same charge group, since they belong to different energy
> groups, as we can see.
> Any suggestions on how to interpret the log excerpt or what more can be
> done?

This gets discussed constantly (hint: Google the error and you will turn up a 
huge number of posts).  You can't arbitrarily parallelize across any number of 
nodes.  There are limits based on system size and settings.
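The limit in question can be sketched numerically: domain decomposition can only split each box dimension into cells at least as large as the minimum cell size reported in the log (1.812 nm here, set by the longest bonded interaction), so the product of cells per dimension caps the usable rank count. A minimal Python sketch of that arithmetic, using hypothetical box dimensions chosen to reproduce the "X 1 Y 1 Z 6" grid from the log (the actual box vectors are not given in the post):

```python
def max_dd_cells(box, min_cell):
    """Max domain-decomposition cells per dimension: floor(L / min_cell)."""
    return [int(L // min_cell) for L in box]

# Hypothetical box vectors in nm (not stated in the post), picked so the
# result matches the log line "The maximum allowed number of cells is: X 1 Y 1 Z 6"
box = (3.5, 3.5, 11.0)
min_cell = 1.812  # nm, "Minimum cell size due to bonded interactions" from the log

cells = max_dd_cells(box, min_cell)
print(cells)                            # [1, 1, 6]
print(cells[0] * cells[1] * cells[2])   # 6 -> at most 6 DD ranks; 16 cannot fit
```

With at most 6 cells total, requesting 16 (or 48) ranks has no compatible grid, which is exactly the fatal error reported; the options are fewer ranks, a larger box, or shortening the long bonded interaction that sets the minimum cell size.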



Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalemkul at outerbanks.umaryland.edu | (410) 706-7441

