[gmx-users] MD vs. free energy simulations

Justin Lemkul jalemkul at vt.edu
Thu Aug 29 13:57:24 CEST 2013



On 8/29/13 2:24 AM, Jernej Zidar wrote:
> Hi,
>    I ran some MD simulations (NPT ensemble) and a series of simulations
> to determine the free energy of solvation in water of a not-too-big
> molecule.
>
>    I noticed that while I was able to run the MD simulations using all
> the CPUs (or threads) in my workstation (12 CPUs or 24 threads,
> respectively), during the free energy runs I can use at most 2 CPUs. If
> I try to use more, the simulation crashes with:
> Program mdrun, VERSION 4.6.3
> Source code file: /home/zidar/utils/gromacs-4.6.3/src/mdlib/domdec.c, line: 6792
>
> Fatal error:
> There is no domain decomposition for 4 nodes that is compatible with
> the given box and a minimum cell size of 4.52667 nm
> Change the number of nodes or mdrun option -rdd or -dds
> Look in the log file for details on the domain decomposition
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
>
> - - - -
>
>    I can use more CPUs only if I switch from the domain decomposition to
> the particle decomposition scheme. The size of the system evaluated is
> 11.92513 nm x 5.44212 nm x 5.35234 nm with ~30,000 atoms, so I assume
> the size of the system is not an issue.
>
>    Big question: Why is that so? Why can I use more CPUs for 'regular'
> MD but only two for free energy simulations?
>

This issue is discussed very frequently, particularly in the context of free 
energy calculations.  Please refer to the list archive and consider the effect 
that the couple_intramol setting has on the DD setup.
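
In short: with couple_intramol = no (the default), the intramolecular 
non-bonded interactions of the decoupled molecule are converted to exclusions 
and explicit pair interactions, so domain decomposition has to keep the whole 
molecule within communicating cells.  For an elongated solute that pushes the 
minimum cell size up (4.52667 nm in your case), and only a couple of DD cells 
then fit in the box.  A minimal sketch of the relevant .mdp lines (the 
moleculetype name "LIG" is only a placeholder for your solute):

    ; free energy section (sketch only)
    free-energy      = yes
    couple-moltype   = LIG     ; placeholder: your solute's moleculetype name
    couple-lambda0   = vdw-q
    couple-lambda1   = none
    couple-intramol  = yes     ; intramolecular non-bondeds stay on the normal
                               ; neighbor lists, so DD is not limited by the
                               ; size of the molecule; the decoupled state is
                               ; then no longer a clean vacuum state

Whether yes or no is appropriate depends on the end state you actually want, 
so check the free energy section of the manual before changing it.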

-Justin

-- 
==================================================

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalemkul at outerbanks.umaryland.edu | (410) 706-7441

==================================================


