[gmx-users] Shell (Drude) model for polarization in GROMACS
jalemkul at vt.edu
Tue Jun 26 16:41:00 CEST 2018
On 6/26/18 10:37 AM, Eric Smoll wrote:
> In a previous thread on core/shell optimization,
> you state:
> "No, I've made wholesale changes to the code in the way that energies and
> forces are computed. The current code has bugs. If you want a modified version
> of the software (caveat emptor), contact me off-list."
> Is this still true? Is core/shell optimization broken in GROMACS 2018.1?
> If so, would you mind sharing your Drude implementation?
Check out git master and switch to the drude branch. Don't try to use
domain decomposition; use only OpenMP for parallelization. Documentation is
unfortunately somewhat sparse but the code is reasonably commented.
Note that everything I have done is for extended Lagrangian dynamics; I
haven't tested much with massless shells or use of the [ polarization ] directive.
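[Editor's note: a minimal sketch of how one might obtain and run that branch. The repository URL, branch name spelling, and thread counts are assumptions based on the advice above, not documented instructions; at the time of this thread the canonical GROMACS repository was hosted on GROMACS's own infrastructure.]

```shell
# Sketch (assumed URL and flags): check out the drude branch,
# build as for any GROMACS version, then run mdrun with a single
# thread-MPI rank and OpenMP threads only, since domain
# decomposition is not supported in this branch.
git clone https://gitlab.com/gromacs/gromacs.git
cd gromacs
git checkout drude                 # the Drude/extended-Lagrangian branch
# ... configure and build with cmake as usual ...
gmx mdrun -ntmpi 1 -ntomp 8 -deffnm drude_md   # OpenMP-only parallelization
```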
> On Mon, Jun 18, 2018 at 1:14 PM, Justin Lemkul <jalemkul at vt.edu> wrote:
>> On 6/18/18 4:05 PM, Eric Smoll wrote:
>>> Thank you so much for the rapid and clear reply! Sorry to ask for a bit
>>> more clarification.
>>> The [ thole_polarization ] directive isn't in the manual at all. Is it
>>> structured the same way as the [ polarization ] directive in the manual:
>>> [ thole_polarization ]
>>> ; Atom i j type alpha
>>> 1 2 1 0.001
>>> If I want Thole corrections, am I correct in assuming that I should list
>>> *all shells* in the system under this thole_polarization directive, with
>>> (as you pointed out) either "i" or "j" as the shell? If "i" is the shell,
>>> "j" is the core; if "j" is the shell, "i" is the core.
>> You have to list everything explicitly, including the shielding factor,
>> and it acts between dipole pairs (Thole isn't between an atom and its shell;
>> it's between neighboring dipoles). I honestly don't know what the format is; I
>> completely rewrote the Thole code for our Drude implementation (still not
>> officially incorporated into a release due to DD issues, but we're close to
>> a fix...)
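[Editor's note: for illustration only, a hypothetical [ thole_polarization ] entry might look like the fragment below. The column layout — two atom/Drude pairs, a function type, a Thole screening factor, and two polarizabilities — is an assumption inferred from the discussion above ("between dipole pairs", "including the shielding factor"), not a documented format; consult the drude branch source for the actual fields.]

```
[ thole_polarization ]
; ai  aj  ak  al  funct  a     alpha1   alpha2     ; assumed layout:
; (ai,aj) = atom/Drude pair 1, (ak,al) = atom/Drude pair 2,
; a = Thole screening factor, alpha = polarizabilities (nm^3)
   1   2   3   4   2     2.6   0.001    0.001
```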
>>> The code for "init_shell_flexcon" was very helpful. Thank you!
>>> nstcalcenergy must be set to 1. The code says that domain decomposition
>>> is not supported so multi-node MPI calculations are not allowed. I can
>>> still use an MPI-enabled GROMACS executable on a single node for shell MD,
>>> correct? Thread parallelization is still permitted, correct?
>> Presumably you're limited to OpenMP, but again I have no idea about this
>> code. I've never actually used it.
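[Editor's note: a minimal sketch of the .mdp settings relevant to the constraints discussed above. Only nstcalcenergy = 1 is stated in the thread; the remaining keywords are standard shell-relaxation options and their values here are illustrative assumptions, not recommendations.]

```
; sketch of shell/Drude-relevant .mdp settings (values are assumptions)
integrator     = md
nstcalcenergy  = 1      ; required by the shell/Drude code, per this thread
emtol          = 1.0    ; shell-minimization tolerance (massless shells only)
niter          = 100    ; max shell-relaxation iterations per MD step
```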
>> Justin A. Lemkul, Ph.D.
>> Assistant Professor
>> Virginia Tech Department of Biochemistry
>> 303 Engel Hall
>> 340 West Campus Dr.
>> Blacksburg, VA 24061
>> jalemkul at vt.edu | (540) 231-3129