[gmx-users] Domain decomposition error, is mdrun_mpi now obsolete?

Justin Lemkul jalemkul at vt.edu
Sun Sep 23 00:08:38 CEST 2012

On 9/22/12 6:05 PM, Ladasky wrote:
> Hello again everyone,
> I'm currently running GROMACS 4.5.4 on Ubuntu Linux 11.10.  I'm trying to
> clean up my simulation conditions.  Many of my MDP files are hold-overs from
> earlier versions of GROMACS, as far back as v. 3.3.  I have written some
> shell scripts which should handle this work automatically -- that is, as
> long as I get no errors.
> I have a six-core CPU, and my scripts invoke mdrun_mpi to take advantage of
> the parallel processors.
> While doing my cleanup work, I just got my first domain decomposition error
> message:
> http://www.gromacs.org/Documentation/Errors#There_is_no_domain_decomposition_for_n_nodes_that_is_compatible_with_the_given_box_and_a_minimum_cell_size_of_x_nm
> Of course, I'll need to work on fixing that error -- though why GROMACS
> would complain about having too many CPUs at its disposal, rather than just
> running with fewer CPUs, is a bit of a mystery to me.

One cannot decompose a system across an arbitrary number of processors.  The rule 
of thumb is at least several hundred atoms per processor.  There are both algorithmic 
constraints and performance reasons for this.
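As a rough illustration of that rule of thumb, the sketch below checks the atoms-per-core ratio before launching a run (the atom count, core count, and threshold of 300 are hypothetical, purely for illustration; the real limit depends on the box dimensions and the minimum cell size reported in the error):

```shell
# Back-of-the-envelope check: domain decomposition needs enough atoms
# per core to build cells larger than the minimum cell size.
natoms=1200   # hypothetical system size
ncores=6      # cores requested
per_core=$((natoms / ncores))
echo "atoms per core: $per_core"
if [ "$per_core" -lt 300 ]; then
    echo "likely too few atoms per core for domain decomposition"
fi
```

If the ratio is too low, the usual fixes are to request fewer cores or, in GROMACS 4.5, to fall back to particle decomposition with `mdrun -pd`.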

> Reading through the comments at that link, I surmise that I may no longer
> need to download and build a separate MPI package, that multiprocessing is
> the default behavior of mdrun.  Is that correct?

There are two methods for parallelization: threading and MPI.  Depending on the 
setup of the system, one or the other can be used, but they are mutually 
exclusive.  For a multi-core workstation, threading is straightforward and does 
not require external MPI libraries, but one can certainly compile an 
MPI-enabled mdrun if desired.
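The two invocation styles look like this (flags as in GROMACS 4.5; the `-deffnm md` file prefix is hypothetical):

```shell
# Threaded mdrun: built-in thread parallelism, no MPI library needed.
mdrun -nt 6 -deffnm md

# MPI-enabled mdrun: requires an external MPI library and a launcher.
mpirun -np 6 mdrun_mpi -deffnm md
```

On a single six-core workstation the threaded binary is the simpler choice; MPI becomes necessary only when running across multiple machines.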



Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080


More information about the gromacs.org_gmx-users mailing list