[gmx-users] Problem with Group and Verlet cut-off scheme

Ben Tam btam125 at hotmail.co.uk
Fri Jul 8 18:05:09 CEST 2016


Dear all,


My system's box size (in nm) is:


    2.58730   2.24067   2.06100


I am not using pressure coupling, and I did not set
periodic_molecules = yes.

So should I just make the system bigger? At the moment I am simulating only 574 molecules.
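
For reference, the log quoted below already shows why DD fails: the minimum cell size of 5.407 nm, scaled by 1/0.8 to 6.758 nm, exceeds even my longest box vector (2.587 nm), so zero cells fit in any dimension. If enlarging by replication is the way to go, a minimal sketch might be (file names and replication counts are illustrative only):

    # replicate the current configuration into a supercell
    gmx genconf -f conf.gro -o big.gro -nbox 3 3 4

Note this multiplies the molecule count, so the [molecules] counts in the topology would need matching adjustment.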

Best regards,

Ben


________________________________
From: gromacs.org_gmx-users-bounces at maillist.sys.kth.se <gromacs.org_gmx-users-bounces at maillist.sys.kth.se> on behalf of Justin Lemkul <jalemkul at vt.edu>
Sent: 08 July 2016 16:30
To: gmx-users at gromacs.org
Subject: Re: [gmx-users] Problem with Group and Verlet cut-off scheme



On 7/8/16 10:54 AM, Ben Tam wrote:
> Hi Justin,
>
>
> This is what I got from the log file:
>
>
> Initializing Domain Decomposition on 6 ranks
> Dynamic load balancing: auto
> Will sort the charge groups at every domain (re)decomposition
> Initial maximum inter charge-group distances:
>     two-body bonded interactions: 4.882 nm, Bond, atoms 456 457
>   multi-body bonded interactions: 4.915 nm, Angle, atoms 457 459
> Minimum cell size due to bonded interactions: 5.407 nm
> Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
> Optimizing the DD grid for 6 cells with a minimum initial size of 6.758 nm
> The maximum allowed number of cells is: X 0 Y 0 Z 0
>
>
>
> But when I check, these are the atoms that are limiting (from the .gro file):
>
>
>     1MOL     C1  455   0.728   1.339   1.541
>     1MOL     C2  456   0.696   1.224   1.636
>     1MOL     C3  457   0.737   1.220   1.769
>     1MOL     C4  458   0.692   1.112   1.851
>     1MOL     H4  459   0.713   1.112   1.951
>
>
> And none of them appears to be more than 4 nm from the others, so I am not sure where else it could go wrong. The smallest length of my box is 2 nm and my cut-off length is 1 nm. Thank you for your help.
>
>

Are your molecules infinite, e.g. bonded across PBC by using "periodic_molecules
= yes"?  If so, that would probably explain it.

But the bigger issue is that your system may be too small to be efficiently
parallelized via DD.  You're probably going to be losing performance.  Your
smallest vector is 2 nm, but what are the other vectors?  Is the system cubic?
You can also try -ntmpi 1 and using OpenMP only for parallelization.  Also note
that if you're using pressure coupling, you may have lots of minimum image
violations, depending on what the other box vectors are.
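
A minimal sketch of the OpenMP-only alternative (the thread count is an assumption for a 6-core run):

    # one thread-MPI rank (no domain decomposition), 6 OpenMP threads
    gmx mdrun -ntmpi 1 -ntomp 6 -deffnm md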

> By the way, do I reply to the gmx-users at gromacs.org address, or how do I reply without disturbing other users?
>

Reply to that address to send to the list.  Support is provided by the
community, not any one specific person, and the open posting of problems and
solutions helps everyone (and fills a useful archive).

-Justin

--
==================================================

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalemkul at outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul




==================================================