[gmx-users] System almost blowing up

Dayhoff, Guy gdayhoff at health.usf.edu
Tue Apr 4 15:31:12 CEST 2017


> 
> On 4/3/17 2:33 PM, Dayhoff, Guy wrote:
>> 
>>>> Hi,
>>>> 
>>>>  I'm receiving a number (~15) of "EM did not converge" warnings, as well as a few (~3) 1-4 interaction
>>>> warnings, during my run. It looks like the system starts down the path to blowing up but recovers. Is this "recovery"
>>>> system dependent? Should I take these messages (even without a subsequent crash) to indicate an
>>>> underlying issue with my system/topology or equilibration? I'm not ignoring or circumventing any warnings
>>>> from pdb2gmx or any other commands.
>>>> 
>>>> For more context: I'm running the Drude-2013 force field with a dt of 0.5 fs, emtol of 1.0, and niter of 150,
>>>> using the V-rescale thermostat and Berendsen pressure coupling while my system's cell shape relaxes,
>>>> as a continuation from position-restrained NVT and then NPT ensembles. The starting structures were minimized
>>>> in vacuum, then solvated and minimized once again prior to the posres equilibration.
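>>>> 
>>>> In .mdp terms that is roughly the following (just the options mentioned here;
>>>> everything else left at my usual values):
>>>> 
>>>>   dt     = 0.0005      ; 0.5 fs timestep
>>>>   emtol  = 1.0         ; force tolerance for the per-step shell/Drude minimization
>>>>   niter  = 150         ; max iterations for optimizing the shell positions
>>>>   tcoupl = v-rescale
>>>>   pcoupl = berendsen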
>>>> 
>>> 
>>> Such a short dt and strict niter should not be necessary in practice.  The
>>> failure of SCF to converge and the accompanying LINCS warnings are what we
>>> typically refer to as the polarization catastrophe, so your system is on the
>>> brink of instability.  This is one of the inherent problems of SCF in
>>> polarizable systems; it often fails to converge and leaves the system
>>> potentially unstable.  The reflective hardwall is much more reliable (and
>>> faster).  If the instability happens during equilibration, that may be OK,
>>> but it is something you should troubleshoot to make sure nothing else is
>>> going wrong.
>>> 
>>> -Justin
>>> 
>> 
>> Got it. I have been spending some time investigating the issue I'm having, and
>> after reading shellfc.cpp I was fairly confident the issue was SCF failing to converge,
>> so my niter and dt settings were an attempt to probe that.
>> 
>> The workflow I've been trying to use is:
>> EM -> Solvate -> EM -> posres NVT w/ V-rescale -> posres Berendsen NPT, isotropic at 1 bar, w/ V-rescale tcoupl ->
>> Berendsen NPT, semi-isotropic at various pressures, w/ V-rescale tcoupl
>> 
>> then once my cell shape has reached equilibrium I finally switch to the Nose-Hoover (NH) thermostat
>> and Parrinello-Rahman (PR) barostat for the production run.
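>> 
>> In command form that chain is roughly the following (file names are placeholders;
>> -r supplies the position-restraint coordinates and -t carries the checkpoint
>> between legs):
>> 
>>   gmx grompp -f em.mdp       -c solv.gro    -p topol.top -o em.tpr
>>   gmx mdrun  -deffnm em
>>   gmx grompp -f nvt.mdp      -c em.gro      -r em.gro      -p topol.top -o nvt.tpr
>>   gmx mdrun  -deffnm nvt
>>   gmx grompp -f npt_iso.mdp  -c nvt.gro     -r nvt.gro     -t nvt.cpt     -p topol.top -o npt_iso.tpr
>>   gmx mdrun  -deffnm npt_iso
>>   gmx grompp -f npt_semi.mdp -c npt_iso.gro -t npt_iso.cpt -p topol.top -o npt_semi.tpr
>>   gmx mdrun  -deffnm npt_semi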
>> 
>> If I switch to employing the hardwall (and extended Lagrangian dynamics for the Drudes), then I understand it is not
>> yet compatible with NPT ensembles. So, if I change my workflow as follows, would it still be appropriate?
>> 
>> EM -> Solvate -> EM -> posres isotropic NPT at 1 bar w/ V-rescale tcoupl -> semi-isotropic NPT at varying pressures
>> w/ V-rescale tcoupl until the cell shape is equilibrated using the PR barostat, then switch to NVT without pcoupl
>> to use the hardwall during my production run?
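>> 
>> (i.e. the production leg's .mdp would carry something like
>> 
>>   tcoupl = nose-hoover
>>   pcoupl = no   ; fixed box, density inherited from the NPT legs
>> 
>> with the hardwall options enabled, whatever the branch calls them.)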
>> 
>> For some reason I've had the idea that I need to be tcoupling and pcoupling during my production runs,
>> so I've been apprehensive about doing this.
>> 
> 
> If you want an NPT ensemble during dynamics you're going to have to use NAMD, 
> CHARMM, or OpenMM.  I haven't coded the barostat yet (fixing the DD bugs has 
> been more important).  You can equilibrate with SCF under NPT and get a suitable 
> density, then switch to NVT, but your use of semi-isotropic coupling suggests a 
> membrane system?  In that case, I would not use NVT during production, as you 
> would be simulating at constant area, which may not be appropriate.
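> 
> For reference, semi-isotropic coupling in the .mdp looks something like this
> (x/y coupled together, z independent; the values here are just the usual
> water-like settings):
> 
>   pcoupl          = berendsen
>   pcoupltype      = semiisotropic
>   ref-p           = 1.0 1.0        ; bar: x/y, then z
>   compressibility = 4.5e-5 4.5e-5  ; bar^-1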
> 

I've parameterized beta-cellulose, and then an additional modified form of beta-cellulose, for the FF,
and I'm looking at the packing behavior of a bundle of strands under different pressures in water.
Everything seems to be proper apart from encountering the polarization catastrophe at random points;
often I can restart a downed run and it will move beyond the previous crash point, but this
is tedious, and even when I wrote a script to do it for me I found that eventually there was a point
of no return, even if I used the prev_cpt.
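
The script was essentially just a restart loop along these lines (a sketch; "md" is a
placeholder run name, and -cpi picks the run back up from the last checkpoint):

  # keep restarting from the checkpoint until mdrun exits cleanly
  until gmx mdrun -deffnm md -cpi md.cpt -maxh 24; do
      echo "run crashed; restarting from last checkpoint" >&2
  done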

I was able to run a single 20-mer strand for 2 ns without a crash, but my actual system has
many strands in it. Does scalability come into play here? My system is near 60k atoms. I
know these features are not officially supported yet; are there any suggestions pertaining
to possible sources of error to check that I wouldn't have come across in the documentation
or other forums?

I've been wrestling with getting the production runs going for a few weeks; perhaps it's time to move
to another simulation suite, as you mentioned, with a full implementation of the Drude oscillators.

- Guy




