[gmx-users] The optimal PME mesh load for parallel simulations is below 0.5
Justin Lemkul
jalemkul at vt.edu
Wed Sep 4 16:09:18 CEST 2013
On 9/4/13 10:03 AM, Steven Neumann wrote:
> Dear Users,
>
> My system involves a protein in vacuum - 80 atoms in a box of 9x9x9 nm^3. I
> want to use PME in my mdp:
>
> rcoulomb = 2.0
> coulombtype = PME
> pme_order = 4
> fourierspacing = 0.12
>
> The cut-off needs to stay like this; I have my own tables for VDW, bonds,
> angles, and dihedrals.
>
> I got the NOTE:
>
> The optimal PME mesh load for parallel simulations is below 0.5
> and for highly parallel simulations between 0.25 and 0.33,
> for higher performance, increase the cut-off and the PME grid spacing
>
> what setting would you suggest to use on 8 CPUs?
>
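On the NOTE itself: PME splits the Coulomb work between a direct-space part
(controlled by rcoulomb) and a mesh part (controlled by fourierspacing), and
scaling both by the same factor keeps the accuracy roughly constant while
shifting work off the mesh. That is all the NOTE is asking for. A rough
sketch - the factor of 1.2 is purely illustrative, and note it means changing
the cut-off you said you wanted to keep fixed:

rcoulomb       = 2.4    ; 2.0 * 1.2 -> more direct-space work
fourierspacing = 0.144  ; 0.12 * 1.2 -> coarser mesh, less FFT work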
That said, I would suggest not using PME here at all :) PME is extremely
inefficient in vacuo because most of the mesh covers empty space, so much of
the reciprocal-space work is simply wasted. Moreover, PME requires PBC, so you
are not really simulating in vacuo at that point; you are simulating something
closer to a dilute crystal of periodic images, which likely introduces
artifacts.
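If you drop PME, a plain cut-off setup without PBC is the usual alternative
for a small system in vacuum. A minimal sketch, assuming your user tables
extend far enough and that you are on the group cut-off scheme (the Verlet
scheme does not support pbc = no):

pbc          = no
ns_type      = simple    ; grid neighbor search needs a periodic box
coulombtype  = Cut-off
rcoulomb     = 2.0
vdwtype      = User      ; you mentioned your own VDW tables
rvdw         = 2.0
comm-mode    = Angular   ; in vacuo, remove rotation as well as translation

With pbc = no there are no periodic images, so the diffuse-crystal problem
disappears entirely.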
-Justin
--
==================================================
Justin A. Lemkul, Ph.D.
Postdoctoral Fellow
Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201
jalemkul at outerbanks.umaryland.edu | (410) 706-7441
==================================================