[gmx-users] Domain decomposition error, is mdrun_mpi now obsolete?

Peter C. Lai pcl at uab.edu
Sun Sep 23 00:27:15 CEST 2012


There are two ways to build a parallel GROMACS. For a cluster of multiple
hosts (distributed memory) you still need to use MPI, but if all your cores
are on one host (shared memory), you can build it without MPI, which gives
you an mdrun that uses threading.
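
For example, the two invocations look something like this (the -deffnm name
and the core counts are just placeholders; adjust them to your own run):

  mdrun -nt 6 -deffnm md                # threaded build, 6 threads on one host
  mpirun -np 6 mdrun_mpi -deffnm md     # MPI build, 6 ranks launched by mpirun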

Domain decomposition is tied to the size of the simulation and to PME, so
it has nothing to do with MPI. If your simulation is too small, there
isn't enough volume to slice into domains larger than the minimum cell size
(you need at least one charge group/particle and its group of neighbors per
domain). Since you started off by using MPI, you told mpirun and mdrun to
use 6 CPUs, so it tried to, couldn't, and gave you an error :)
(This happens with the threaded version too - mdrun defaults to the highest
number of processors specified on the command line, then determines the DD
grid, then tries to chunk the simulation out to fit that grid.)
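
If you hit that minimum-cell-size error, the usual workarounds are to run on
fewer cores (fewer, larger domains) or, on 4.5.x, to fall back to particle
decomposition; roughly along these lines (again, the file names are
placeholders):

  mdrun -nt 2 -deffnm md                # fewer threads -> larger domains
  mpirun -np 2 mdrun_mpi -deffnm md     # same idea with the MPI build
  mdrun -nt 6 -pd -deffnm md            # particle decomposition instead of DD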

On 2012-09-22 03:05:23PM -0700, Ladasky wrote:
> Hello again everyone,
> 
> I'm currently running GROMACS 4.5.4 on Ubuntu Linux 11.10.  I'm trying to
> clean up my simulation conditions.  Many of my MDP files are hold-overs from
> earlier versions of GROMACS, as far back as v. 3.3.  I have written some
> shell scripts which should handle this work automatically -- that is, as
> long as I get no errors.
> 
> I have a six-core CPU, and my scripts invoke mdrun_mpi to take advantage of
> the parallel processors.
> 
> While doing my cleanup work, I just got my first domain decomposition error
> message:
> 
> http://www.gromacs.org/Documentation/Errors#There_is_no_domain_decomposition_for_n_nodes_that_is_compatible_with_the_given_box_and_a_minimum_cell_size_of_x_nm
> 
> Of course, I'll need to work on fixing that error -- though why GROMACS
> would complain about having too many CPUs at its disposal, rather than just
> running with fewer CPUs, is a bit of a mystery to me.
> 
> Reading through the comments at that link, I surmise that I may no longer
> need to download and build a separate MPI package, that multiprocessing is
> the default behavior of mdrun.  Is that correct?
> 
> 
> 
> 

-- 
==================================================================
Peter C. Lai			| University of Alabama-Birmingham
Programmer/Analyst		| KAUL 752A
Genetics, Div. of Research	| 705 South 20th Street
pcl at uab.edu			| Birmingham AL 35294-4461
(205) 690-0808			|
==================================================================



