[gmx-users] System almost blowing up

Justin Lemkul jalemkul at vt.edu
Mon Apr 3 19:53:06 CEST 2017



On 4/3/17 1:42 PM, Dayhoff, Guy wrote:
> Hi,
>
>    I’m receiving a number (~15) of “EM did not converge” warnings, as well as a few (~3) 1-4 interaction
> warnings, during my run. It looks like the system starts down the path to blowing up but recovers. Is this “recovery”
> system dependent? Should I take these messages (even without a subsequent crash) to indicate an
> underlying issue with my system/topology or equilibration? I’m not ignoring or circumventing any warnings
> from pdb2gmx or any other command.
>
> For more context: I’m running the Drude-2013 force field with a dt of 0.5 fs, emtol of 1.0, and niter of 150,
> using the V-rescale thermostat and Berendsen pressure coupling while my system’s cell shape relaxes,
> as a continuation from position-restrained NVT and then NPT ensembles. The starting structures were minimized
> in vacuum, then solvated and minimized once again prior to the posres equilibration.
>

Such a short dt and such a strict niter should not be necessary in practice.  The
failure of SCF to converge (and the associated LINCS warnings) is what we
typically refer to as a polarization catastrophe, so your system is on the brink
of instability.  This is one of the inherent problems of SCF in polarizable
systems: it often fails to converge and leaves the system potentially unstable.
The reflective hardwall approach is much more reliable (and faster).  If the
instability is happening during equilibration, that may be OK, but it is
something you should try to troubleshoot to make sure nothing else is going wrong.
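If it helps to see the difference, here is a rough 1-D sketch of the two schemes.
This is not the GROMACS implementation; the spring constant, hardwall radius, and
function names below are placeholders chosen for illustration.  It contrasts an
SCF-style relaxation of the Drude displacement, which can exit without converging,
with an extended-Lagrangian step using a reflective hardwall, which simply caps
how far the Drude particle can drift.

import math

K_DRUDE = 2.0e5      # core-Drude spring constant, kJ/mol/nm^2 (illustrative value)
R_HARDWALL = 0.025   # reflective hardwall radius, nm (illustrative value)

def scf_relax_drude(ext_force, emtol=1.0, niter=150):
    """SCF-style relaxation of a 1-D Drude displacement d against a fixed
    environment.  If the external (polarizing) force grows faster with d
    than the restoring spring force, the fixed-point iteration diverges and
    the loop exits unconverged -- the 'EM did not converge' situation."""
    d = 0.0
    for _ in range(niter):
        net = ext_force(d) - K_DRUDE * d     # net force on the Drude particle
        if abs(net) < emtol:
            return d, True                   # converged to the requested emtol
        d = ext_force(d) / K_DRUDE           # fixed-point update: d = F_ext(d)/k
    return d, False                          # niter exhausted, not converged

def hardwall_step(d, v, dt):
    """Extended-Lagrangian alternative: integrate the Drude displacement d
    with velocity v, but reflect it off a wall at +/- R_HARDWALL so the
    induced dipole can never grow without bound (no polarization catastrophe)."""
    d_new = d + v * dt
    if abs(d_new) > R_HARDWALL:
        overshoot = abs(d_new) - R_HARDWALL
        d_new = math.copysign(R_HARDWALL - overshoot, d_new)  # reflect position
        v = -v                                                # reverse velocity
    return d_new, v

In the real implementation the reflection acts on the core-Drude bond vector in
3-D rather than a scalar displacement, but the 1-D picture captures the idea: SCF
searches for the self-consistent minimum and can fail to find it, while the
hardwall simply prevents the displacement from running away.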

-Justin

-- 
==================================================

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalemkul at outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==================================================

