[gmx-users] Anybody using Silica InterfaceFF on Gromacs?

Diez Fernandez, Amanda amanda.diez10 at imperial.ac.uk
Mon Jul 3 01:45:53 CEST 2017


Hi,
I have now solved the problem.
I have added a list of the 1-4 pairs under the [ pairs ] directive, and
this has led to the correct equilibrium bond length.
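For anyone finding this later, a minimal sketch of such a section in the
topology (the atom indices are hypothetical; funct 1 is the standard 1-4
pair interaction):

[ pairs ]
;  ai   aj  funct
    1    4      1
    2    5      1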
For reference, the force field I am using is described in this paper:
http://pubs.acs.org/doi/abs/10.1021/cm500365c
Cheers,
Amanda




On 30/06/2017, 01:46, "gromacs.org_gmx-users-bounces at maillist.sys.kth.se
on behalf of gromacs.org_gmx-users-request at maillist.sys.kth.se"
<gromacs.org_gmx-users-bounces at maillist.sys.kth.se on behalf of
gromacs.org_gmx-users-request at maillist.sys.kth.se> wrote:

>Send gromacs.org_gmx-users mailing list submissions to
>	gromacs.org_gmx-users at maillist.sys.kth.se
>
>To subscribe or unsubscribe via the World Wide Web, visit
>	https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
>or, via email, send a message with subject or body 'help' to
>	gromacs.org_gmx-users-request at maillist.sys.kth.se
>
>You can reach the person managing the list at
>	gromacs.org_gmx-users-owner at maillist.sys.kth.se
>
>When replying, please edit your Subject line so it is more specific
>than "Re: Contents of gromacs.org_gmx-users digest..."
>
>
>Today's Topics:
>
>   1. Re: NoPbc (Mark Abraham)
>   2. Re: Removing Dummy Atoms (Mark Abraham)
>   3. Re: gromacs.org_gmx-users Digest, Vol 158, Issue 186 (Thanh Le)
>   4. Re: gromacs.org_gmx-users Digest, Vol 158, Issue 186
>      (Mark Abraham)
>   5. Re: Anybody using Silica InterfaceFF on Gromacs? (Alex)
>
>
>----------------------------------------------------------------------
>
>Message: 1
>Date: Thu, 29 Jun 2017 22:12:46 +0000
>From: Mark Abraham <mark.j.abraham at gmail.com>
>To: gmx-users at gromacs.org
>Subject: Re: [gmx-users] NoPbc
>Message-ID:
>	<CAMNuMATjL1Zfpgq+Xj9PyCZ2SECQGfusWdtG_JX9h8Lqhj+pxw at mail.gmail.com>
>Content-Type: text/plain; charset="UTF-8"
>
>On Thu, Jun 29, 2017 at 7:29 PM Mostafa Javaheri
><javaheri.gromacs at gmail.com>
>wrote:
>
>> Dear gmx users
>>
>> I'm running a QM/MM simulation using the ORCA-GROMACS interface. The
>> whole simulation will take only 1 ps, and I need to set pbc off. Over
>> this short simulation, does the pressure of the box change?
>
>
>There's no box if pbc is off. What would its walls be made of? Thus, no
>external pressure.
>
>
>> Does the water
>> vaporize after 500 steps? Any help will be appreciated.
>>
>
>Water will start to evaporate, but probably not noticeably in that time
>frame. Run a classical MD with the same setup for 1 ps on just a globule
>of water to see.
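>
>A minimal sketch of the relevant .mdp lines for such a vacuum test
>(illustrative values only, not a complete .mdp):
>
>pbc        = no        ; no box, hence no external pressure
>dt         = 0.002     ; 2 fs time step
>nsteps     = 500       ; 500 * 2 fs = 1 ps
>comm-mode  = Angular   ; remove rigid-body motion of the globule in vacuum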
>
>Mark
>
>
>> Regards
>>
>> M.Javaheri
>
>
>------------------------------
>
>Message: 2
>Date: Thu, 29 Jun 2017 22:13:36 +0000
>From: Mark Abraham <mark.j.abraham at gmail.com>
>To: gmx-users at gromacs.org
>Subject: Re: [gmx-users] Removing Dummy Atoms
>Message-ID:
>	<CAMNuMAQEpBc4+kcoBp+tnY7-hEVufz-sEACsH4KG5Uqic6USrA at mail.gmail.com>
>Content-Type: text/plain; charset="UTF-8"
>
>Hi,
>
>gmx trjconv can apply a selection (e.g. one generated with gmx select) to
>your trajectory frames. gmx convert-tpr can do the same to your tpr, which
>you may need for later analysis to work.
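>
>For example (a sketch; the residue name DUM and the file names are
>assumptions for illustration):
>
>gmx select -s topol.tpr -on nodummy.ndx -select 'not resname DUM'
>gmx trjconv -f traj.xtc -s topol.tpr -n nodummy.ndx -o traj_nodummy.xtc
>gmx convert-tpr -s topol.tpr -n nodummy.ndx -o topol_nodummy.tpr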
>
>Mark
>
>On Thu, Jun 29, 2017 at 10:57 PM Mostafa Javaheri <
>javaheri.gromacs at gmail.com> wrote:
>
>> Dear gmx users
>>
>> I would really appreciate it if anyone could tell me how I can remove
>> dummy atoms from my trajectory. I added them before running the
>> simulation to apply a repulsive potential, and now that I am analyzing
>> the results there is no need for them.
>>
>> Regards
>>
>> M.Javaheri
>
>
>------------------------------
>
>Message: 3
>Date: Thu, 29 Jun 2017 15:20:44 -0700
>From: Thanh Le <thanh.q.le at sjsu.edu>
>To: gromacs.org_gmx-users at maillist.sys.kth.se
>Subject: Re: [gmx-users] gromacs.org_gmx-users Digest, Vol 158, Issue
>	186
>Message-ID: <CD7273ED-6323-4343-B324-0C2A80C1376B at sjsu.edu>
>Content-Type: text/plain;	charset=us-ascii
>
>Hi Mr. Abraham.
>My system is quite small, only about 8000 atoms. I have run this system
>for 100 ns, which took roughly 2 days, so a run of 1 microsecond would
>take about 20 days. I am trying to shorten that to 2 days by using more
>than one node.
>Thanks,
>Thanh Le
>
>
>------------------------------
>
>Message: 4
>Date: Thu, 29 Jun 2017 22:37:54 +0000
>From: Mark Abraham <mark.j.abraham at gmail.com>
>To: gmx-users at gromacs.org, gromacs.org_gmx-users at maillist.sys.kth.se
>Subject: Re: [gmx-users] gromacs.org_gmx-users Digest, Vol 158, Issue
>	186
>Message-ID:
>	<CAMNuMAR2LZXR-6fZ+JqDerdDommxCPfq7hmnXzaK+ShUZpA5BA at mail.gmail.com>
>Content-Type: text/plain; charset="UTF-8"
>
>Hi,
>
>Don't even consider running on more than one node. You can see for
>yourself by comparing the performance of runs on different numbers of
>cores, e.g.
>
>gmx mdrun -nt 1 -pin on
>gmx mdrun -nt 2 -pin on
>gmx mdrun -nt 14 -pin on
>gmx mdrun -nt 28 -pin on
>
>Parallel efficiency drops off as you approach 100 atoms per core.
>
>Further, a factor of seven in the core count is a surefire way to be
>inefficient, because the domain decomposition will have to partition into
>seven domains in one direction. I would consider running three simulations
>per node, with 9, 10, and 9 cores per simulation, using gmx mdrun -nt x
>-pin on -pinoffset y for suitable x and y. But try the above experiment
>first.
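>
>On a 28-core node that could look like the sketch below (the offsets
>assume consecutively numbered cores, and the -deffnm names are
>placeholders):
>
>gmx mdrun -nt 9  -pin on -pinoffset 0  -deffnm run1 &
>gmx mdrun -nt 10 -pin on -pinoffset 9  -deffnm run2 &
>gmx mdrun -nt 9  -pin on -pinoffset 19 -deffnm run3 &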
>
>Mark
>
>On Fri, Jun 30, 2017 at 12:21 AM Thanh Le <thanh.q.le at sjsu.edu> wrote:
>
>> Hi Mr. Abraham.
>> My system is quite small, only about 8000 atoms. I have run this system
>> for 100 ns, which took roughly 2 days, so a run of 1 microsecond would
>> take about 20 days. I am trying to shorten that to 2 days by using more
>> than one node.
>> Thanks,
>> Thanh Le
>
>
>------------------------------
>
>Message: 5
>Date: Thu, 29 Jun 2017 18:46:02 -0600
>From: Alex <nedomacho at gmail.com>
>To: Discussion list for GROMACS users <gmx-users at gromacs.org>
>Subject: Re: [gmx-users] Anybody using Silica InterfaceFF on Gromacs?
>Message-ID:
>	<CAMJZ6qFiWfCzPES-i+ENgWi9Y+HDXDSnAq6pmsA010nZXr-CuQ at mail.gmail.com>
>Content-Type: text/plain; charset="UTF-8"
>
>> He he, childish :)
>
>David, no offense intended. I just think that when applied to solids, the
>entire concept of what works so well for biomolecular systems becomes a
>bit of a joke. And vice versa, to be fair. Spoken from experience, really
>-- we here used Gromacs to simulate things that I keep telling people not
>to simulate with Gromacs, and it got published! :)
>
>In any case, I second what was said above re: the number of exclusions.
>Solid-state potentials use smooth drop-offs to exclude long-range
>interactions between closely bonded neighbors, so looking into David's
>suggestion may in fact fix the issues immediately.
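>
>(In a GROMACS topology, how many bonded neighbors are excluded from
>nonbonded interactions is set by nrexcl in [ moleculetype ]; the molecule
>name below is hypothetical:
>
>[ moleculetype ]
>; name   nrexcl
>SIL      3
>
>with nrexcl 3 excluding pairs up to three bonds apart.)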
>
>Alex
>
>
>------------------------------
>
>-- 
>Gromacs Users mailing list
>
>* Please search the archive at
>http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>posting!
>
>* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>* For (un)subscribe requests visit
>https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>send a mail to gmx-users-request at gromacs.org.
>
>End of gromacs.org_gmx-users Digest, Vol 158, Issue 187
>*******************************************************



More information about the gromacs.org_gmx-users mailing list