[gmx-users] gromacs.org_gmx-users Digest, Vol 140, Issue 43
Raju Lunkad
r.lunkad at students.iiserpune.ac.in
Fri Dec 11 14:02:11 CET 2015
Dear all,

In calcium oxalate monohydrate the charges are:

    Type           Charge
    Ca              2.000
    C               0.992
    O (oxalate)    -0.996
    O (water)      -0.773
    H               0.3665

Calcium oxalate monohydrate has two types of bonding: Ox1, where the
oxalate is bound to calcium only, and Ox2, where the oxalate is bound to
calcium and oxygen. Kindly help me generate an itp file for these types
of interactions.
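
What I am after is something along these lines (a sketch only; the atom
type names and the bonded terms below are placeholders I chose for
illustration, not validated force-field values):

    ; caox.itp -- sketch for the oxalate part of calcium oxalate monohydrate
    [ moleculetype ]
    ; name    nrexcl
    OX1       3

    [ atoms ]
    ;  nr  type  resnr  resid  atom  cgnr   charge    mass
        1  C     1      OX1    C1    1       0.992    12.011
        2  OM    1      OX1    O1    1      -0.996    15.999
        3  OM    1      OX1    O2    1      -0.996    15.999
        4  C     1      OX1    C2    2       0.992    12.011
        5  OM    1      OX1    O3    2      -0.996    15.999
        6  OM    1      OX1    O4    2      -0.996    15.999

    [ bonds ]
    ;  ai  aj  funct  ; bond parameters (e.g. force-field macros) still needed
        1   4   2
        1   2   2
        1   3   2
        4   5   2
        4   6   2

Would Ox2 then simply be a second [ moleculetype ] with the same atoms,
with the contacts to Ca2+ handled by the non-bonded terms rather than by
explicit bonds?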
On Fri, Dec 11, 2015 at 5:55 PM, <
gromacs.org_gmx-users-request at maillist.sys.kth.se> wrote:
> Send gromacs.org_gmx-users mailing list submissions to
> gromacs.org_gmx-users at maillist.sys.kth.se
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
> or, via email, send a message with subject or body 'help' to
> gromacs.org_gmx-users-request at maillist.sys.kth.se
>
> You can reach the person managing the list at
> gromacs.org_gmx-users-owner at maillist.sys.kth.se
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of gromacs.org_gmx-users digest..."
>
>
> Today's Topics:
>
>    1. Re: G53A6, non-symmetric selection matrix (Piggot T.)
>    2. Performance on multiple GPUs per node (Jens Krüger)
> 3. Re: using OH ions in combination with CHARMM27 force field
> (soumadwip ghosh)
>    4. Re: Performance on multiple GPUs per node (Szilárd Páll)
>    5. Regarding generating itp file. (Raju Lunkad)
>    6. Re: Regarding generating itp file. (Justin Lemkul)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 11 Dec 2015 11:12:23 +0000
> From: "Piggot T." <T.Piggot at soton.ac.uk>
> To: "gmx-users at gromacs.org" <gmx-users at gromacs.org>
> Subject: Re: [gmx-users] G53A6, non-symmetric selection matrix
> Message-ID:
> <6989AB580484164BA0A88714DBE8242057C1583F at SRV00046.soton.ac.uk>
> Content-Type: text/plain; charset="us-ascii"
>
> Hi Mohsen,
>
> This table can indeed be quite confusing at first. I suggest you take a
> look at http://redmine.gromacs.org/issues/773#note-10 where there is a
> discussion of how to interpret table 8 (for a specific example, but it
> should highlight how it works). Hopefully that should answer your questions.
>
> Cheers
>
> Tom
> ________________________________________
> From: gromacs.org_gmx-users-bounces at maillist.sys.kth.se [
> gromacs.org_gmx-users-bounces at maillist.sys.kth.se] on behalf of Mohsen
> Ramezanpour [ramezanpour.mohsen at gmail.com]
> Sent: 10 December 2015 23:42
> To: Discussion list for GROMACS users
> Subject: [gmx-users] G53A6, non-symmetric selection matrix
>
> Dear All,
>
> Reading the parameters for the GROMOS 53A6 ff (article by Oostenbrink et
> al., Journal of Computational Chemistry 25.13 (2004): 1656-1676), I got
> confused about the van der Waals interactions between non-bonded atoms.
>
> For two atom types, we use combination rules to get (Cij 6) and (Cij 12),
> and these combination rules (eq. 15 in the article) make use of a
> selection matrix (table 8 in the article). However, this selection matrix
> is not symmetric.
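>
> If I read eq. 15 correctly, the rules have the form (my own LaTeX
> paraphrase of the paper, so please correct me if I am wrong):
>
>     C^{(6)}_{ij}  = \sqrt{ C^{(6)}_{ii} \, C^{(6)}_{jj} }
>     C^{(12)}_{ij} = \sqrt{ C^{(12,k)}_{ii} \, C^{(12,l)}_{jj} }
>
> where each atom type carries several C12 parameters and the indices k
> and l are chosen via the selection matrix of table 8.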
>
> This means the L-J interaction between two atom types depends on the
> order in which we put them in [ pairs ]. Is that correct?
>
> If this is true, then the order of atom numbers in [ pairs ] matters. But
> that does not make sense: why should (Cij 12) be different from (Cji 12)?
>
> Thanks in advance for your comments
>
> Best,
> Mohsen
>
> --
> *Rewards work better than punishment ...*
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-request at gromacs.org.
>
> ------------------------------
>
> Message: 2
> Date: Fri, 11 Dec 2015 11:54:22 +0100
> From: Jens Krüger <krueger at informatik.uni-tuebingen.de>
> To: gromacs.org_gmx-users at maillist.sys.kth.se
> Subject: [gmx-users] Performance on multiple GPUs per node
> Message-ID: <566AAB5E.4070400 at informatik.uni-tuebingen.de>
> Content-Type: text/plain; charset=utf-8; format=flowed
>
> Dear all,
>
> we are currently planning a new cluster at our university's compute
> centre. The big question on our side is how many and which GPUs we should
> put into the nodes.
>
> We have access to a test system with four Tesla K80s per node. Using one
> GPU we can reach something like 23 ns/day for the ADH system (PME,
> cubic), which is pretty much in line with e.g.
> http://exxactcorp.com/index.php/solution/solu_list/84
>
> When trying to use two or more GPUs on one node, the performance plunges
> to below 10 ns/day no matter how we split the MPI/OpenMP threads; the
> sketch below shows the kind of splits we tried. Does anybody have access
> to a comparable hardware setup? We would be interested in benchmark data
> answering the question: does GROMACS 5.1 scale on more than one GPU per
> node?
>
> Thanks and best wishes,
>
> Jens
>
>
>
>
> ------------------------------
>
> Message: 3
> Date: Fri, 11 Dec 2015 17:09:20 +0530
> From: soumadwip ghosh <soumadwipghosh at gmail.com>
> To: "gromacs.org_gmx-users"
> <gromacs.org_gmx-users at maillist.sys.kth.se>
> Subject: Re: [gmx-users] using OH ions in combination with CHARMM27
> force field
> Message-ID:
> <CAOci0DZ-c9MQ7jqXfyKm=NcV=Ztr7siCdUKW=
> jJknXKnrC_amw at mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> Thanks Justin for your help. I will carry out the simulation with the
> 0.1 M NaOH system I had previously (the coordinates and the topology) and
> let you know whether the desired interactions take place.
>
> Cheers,
> Soumadwip
>
>
> ------------------------------
>
> Message: 4
> Date: Fri, 11 Dec 2015 12:54:09 +0100
> From: Szilárd Páll <pall.szilard at gmail.com>
> To: Discussion list for GROMACS users <gmx-users at gromacs.org>
> Cc: Discussion list for GROMACS users
> <gromacs.org_gmx-users at maillist.sys.kth.se>
> Subject: Re: [gmx-users] Performance on multiple GPUs per node
> Message-ID:
> <
> CANnYEw4+4p4FCyXVsUMjWTSNY_fjOV2mPtOtmEcX6-+BPhmENg at mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> Hi,
>
> Without details of your benchmarks it's hard to comment on why you do
> not see a performance improvement with multiple GPUs per node. Sharing
> some logs would be helpful.
>
> Are you comparing performance with N cores and a varying number of GPUs?
> The balance of hardware resources is a key factor in scaling, and my
> guess is that your runs are essentially CPU-bound, hence adding more GPUs
> does not help.
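>
> As a sketch of the kind of like-for-like comparison I mean (the core
> and GPU counts are placeholders for your node):
>
>     # same cores in both runs; only the number of GPUs changes
>     gmx mdrun -ntmpi 2 -ntomp 8 -gpu_id 00   # both ranks share GPU 0
>     gmx mdrun -ntmpi 2 -ntomp 8 -gpu_id 01   # one rank per GPU
>
> If both runs perform about the same, the runs are CPU-bound and adding
> GPUs will not help.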
>
> Have a look at these papers:
> https://doi.org/10.1002/jcc.24030
> https://doi.org/10.1007/978-3-319-15976-8_1
> The former especially covers the topic quite well, and both show scaling
> of a <100k-atom protein system to 32-64 nodes (dual socket / dual GPU).
>
> Cheers,
>
> --
> Szilárd
>
> On Fri, Dec 11, 2015 at 11:54 AM, Jens Krüger <
> krueger at informatik.uni-tuebingen.de> wrote:
>
> > Dear all,
> >
> > we are currently planning a new cluster at our university's compute
> > centre. The big question on our side is how many and which GPUs we
> > should put into the nodes.
> >
> > We have access to a test system with four Tesla K80s per node. Using
> > one GPU we can reach something like 23 ns/day for the ADH system (PME,
> > cubic), which is pretty much in line with e.g.
> > http://exxactcorp.com/index.php/solution/solu_list/84
> >
> > When trying to use two or more GPUs on one node, the performance
> > plunges to below 10 ns/day no matter how we split the MPI/OpenMP
> > threads. Does anybody have access to a comparable hardware setup? We
> > would be interested in benchmark data answering the question: does
> > GROMACS 5.1 scale on more than one GPU per node?
> >
> > Thanks and best wishes,
> >
> > Jens
> >
> >
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-request at gromacs.org.
> >
>
>
> ------------------------------
>
> Message: 5
> Date: Fri, 11 Dec 2015 17:31:08 +0530
> From: Raju Lunkad <r.lunkad at students.iiserpune.ac.in>
> To: gromacs.org_gmx-users at maillist.sys.kth.se
> Subject: [gmx-users] Regarding generating itp file.
> Message-ID:
> <CAJgYQ1=AQL5OQTBf=3DVYWFEt=
> YZz-kd_wQpk4E53xdqdOcDaQ at mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> Dear GROMACS Admins,
>
> I want to generate an itp file for a crystal structure (calcium oxalate
> monohydrate). Can anyone suggest how to generate the itp file?
>
> Thanking you,
>
> Yours sincerely,
>
> Raju Lunkad
>
>
> ------------------------------
>
> Message: 6
> Date: Fri, 11 Dec 2015 07:25:21 -0500
> From: Justin Lemkul <jalemkul at vt.edu>
> To: gmx-users at gromacs.org
> Subject: Re: [gmx-users] Regarding generating itp file.
> Message-ID: <566AC0B1.1000206 at vt.edu>
> Content-Type: text/plain; charset=windows-1252; format=flowed
>
>
>
> On 12/11/15 7:01 AM, Raju Lunkad wrote:
> > Dear GROMACS Admins,
> >
> > I want to generate an itp file for a crystal structure (calcium oxalate
> > monohydrate). Can anyone suggest how to generate the itp file?
> >
>
> Ca2+ and water are likely already part of just about any force field.
> Oxalate is just two carboxylate groups, which is trivial to put together
> from acid building blocks.
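>
> As a sketch of how the pieces would then combine in the .top (molecule
> names and counts here are placeholders, not actual unit-cell contents):
>
>     [ molecules ]
>     ; name    count
>     CA        2
>     OXL       2
>     SOL       2
>
> with CA (the ion) and SOL (water) taken from the force field, so that
> only the oxalate [ moleculetype ] has to be written by hand.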
>
> -Justin
>
> --
> ==================================================
>
> Justin A. Lemkul, Ph.D.
> Ruth L. Kirschstein NRSA Postdoctoral Fellow
>
> Department of Pharmaceutical Sciences
> School of Pharmacy
> Health Sciences Facility II, Room 629
> University of Maryland, Baltimore
> 20 Penn St.
> Baltimore, MD 21201
>
> jalemkul at outerbanks.umaryland.edu | (410) 706-7441
> http://mackerell.umaryland.edu/~jalemkul
>
> ==================================================
>
>
> ------------------------------
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-request at gromacs.org.
>
> End of gromacs.org_gmx-users Digest, Vol 140, Issue 43
> ******************************************************
>