[gmx-users] The optimal PME mesh load for parallel simulations is below 0.5

Steven Neumann s.neumann08 at gmail.com
Wed Sep 4 16:18:06 CEST 2013


Thank you. I am using my own vdw tables, so I need a cut-off.
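(For context, a minimal sketch of what a tabulated non-bonded setup looks
like in the mdp, assuming GROMACS 4.x option names; the values are
illustrative, not a recommendation:

vdwtype         = user    ; VdW read from table.xvg, out to rvdw
rvdw            = 2.0     ; a finite cut-off is required for tables
table-extension = 1       ; extra table range (nm) beyond the cut-off

Tabulated bonded terms are passed to mdrun separately, e.g. with -tableb.)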




On Wed, Sep 4, 2013 at 3:13 PM, Justin Lemkul <jalemkul at vt.edu> wrote:

>
>
> On 9/4/13 10:11 AM, Steven Neumann wrote:
>
>> Thank you! Would you suggest just a cut-off for coulomb?
>>
>>
> Not a finite one.  The best in vacuo settings are:
>
> pbc = no
> rlist = 0
> rvdw = 0
> rcoulomb = 0
> nstlist = 0
> vdwtype = cut-off
> coulombtype = cut-off
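>
> A typical invocation with those settings might look like this (a sketch
> only; the file names are placeholders, and with no PBC and infinite
> cut-offs the run is best kept on a single core):
>
> grompp -f vacuum.mdp -c protein.gro -p topol.top -o vacuum.tpr
> mdrun -deffnm vacuum -nt 1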
>
> -Justin
>
>  On Wed, Sep 4, 2013 at 3:09 PM, Justin Lemkul <jalemkul at vt.edu> wrote:
>>
>>
>>>
>>> On 9/4/13 10:03 AM, Steven Neumann wrote:
>>>
>>>> Dear Users,
>>>>
>>>> My system involves a protein in vacuum: 80 atoms in a 9x9x9 nm3 box. I
>>>> want to use PME in my mdp:
>>>>
>>>> rcoulomb       = 2.0
>>>> coulombtype    = PME
>>>> pme_order      = 4
>>>> fourierspacing = 0.12
>>>>
>>>> The cut-off needs to stay like this; I have my own tables for VdW,
>>>> bonds, angles, and dihedrals.
>>>>
>>>> I got this NOTE:
>>>>
>>>> The optimal PME mesh load for parallel simulations is below 0.5
>>>>     and for highly parallel simulations between 0.25 and 0.33,
>>>>     for higher performance, increase the cut-off and the PME grid
>>>>     spacing
>>>>
>>>> What settings would you suggest on 8 CPUs?
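>>>>
>>>> (A note on what the rebalancing in that NOTE would look like if PME
>>>> were kept: scaling the cut-off and the grid spacing by the same factor
>>>> keeps the Ewald accuracy roughly constant while shifting work from the
>>>> mesh to direct space. A sketch, with an illustrative factor of 1.2:
>>>>
>>>> rcoulomb       = 2.4    ; 2.0 * 1.2
>>>> fourierspacing = 0.144  ; 0.12 * 1.2
>>>> )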
>>>>
>>>>
>>> I would suggest not using PME :)  The problem is that PME is extremely
>>> inefficient in vacuo, because it spends a lot of time doing nothing due
>>> to the empty space. Moreover, you're likely not really simulating in
>>> vacuo at that point: because you've got PBC, you're really doing a
>>> simulation in more of a diffuse crystal environment, so there are
>>> probably artifacts.
>>>
>>> -Justin
>>>
>>> --
>>> ==================================================
>>>
>>> Justin A. Lemkul, Ph.D.
>>> Postdoctoral Fellow
>>>
>>> Department of Pharmaceutical Sciences
>>> School of Pharmacy
>>> Health Sciences Facility II, Room 601
>>> University of Maryland, Baltimore
>>> 20 Penn St.
>>> Baltimore, MD 21201
>>>
>>> jalemkul at outerbanks.umaryland.edu | (410) 706-7441
>>>
>>> ==================================================
>>>
>>>
> --
> ==================================================
>
> Justin A. Lemkul, Ph.D.
> Postdoctoral Fellow
>
> Department of Pharmaceutical Sciences
> School of Pharmacy
> Health Sciences Facility II, Room 601
> University of Maryland, Baltimore
> 20 Penn St.
> Baltimore, MD 21201
>
> jalemkul at outerbanks.umaryland.edu | (410) 706-7441
>
> ==================================================
> --
> gmx-users mailing list    gmx-users at gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the www
> interface or send it to gmx-users-request at gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>


