[gmx-users] gromacs.org_gmx-users Digest, Vol 124, Issue 45
Pepe Sapam
tuleshworisapam at gmail.com
Mon Aug 11 12:09:48 CEST 2014
I wish this had a different color.
http://www.jabong.com/Lara-Karen-Grey-Sandals-637969.html
With Regards
S. Tuleshwori Devi
Research Scholar
Centre for Bioinformatics,
Pondicherry University,
Pondicherry - 605014
On Mon, Aug 11, 2014 at 3:30 PM, <gromacs.org_gmx-users-request at maillist.sys.kth.se> wrote:
> Send gromacs.org_gmx-users mailing list submissions to
> gromacs.org_gmx-users at maillist.sys.kth.se
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
> or, via email, send a message with subject or body 'help' to
> gromacs.org_gmx-users-request at maillist.sys.kth.se
>
> You can reach the person managing the list at
> gromacs.org_gmx-users-owner at maillist.sys.kth.se
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of gromacs.org_gmx-users digest..."
>
>
> Today's Topics:
>
> 1. Re: Water molecule near protein surface cannot be settled.
> (Dawid das)
> 2. Re: Can we set the number of pure PME nodes when using
> GPU&CPU? (Theodore Si)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 11 Aug 2014 10:40:26 +0100
> From: Dawid das <addiw7 at googlemail.com>
> To: gromacs.org_gmx-users at maillist.sys.kth.se
> Subject: Re: [gmx-users] Water molecule near protein surface cannot be
> settled.
> Message-ID:
>         <CAKSLqn4vbJGd+GZbVCVepE+Gt81yx-W3RW3QFDxSZQEUuvoKTg at mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> I forgot to mention that I get a similar error for steepest-descent
> minimization, not only for conjugate-gradient. Should I change some options
> in the *.mdp files?
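>
> A minimal steepest-descent *.mdp along these lines is a common starting
> point for "cannot be settled" errors; the option names are standard GROMACS
> ones, the values below are only illustrative, and -DFLEXIBLE only takes
> effect if the water topology provides a flexible variant:
>
>     ; min.mdp -- illustrative values only
>     define      = -DFLEXIBLE   ; flexible water during minimization, if the topology supports it
>     integrator  = steep        ; steepest-descent minimization
>     emtol       = 1000.0       ; stop when the maximum force drops below 1000 kJ/mol/nm
>     emstep      = 0.001        ; smaller initial step than the usual 0.01 nm
>     nsteps      = 50000
>     constraints = none         ; no bond constraints while minimizing
>
> If the minimization still fails at the same waters, overlapping or badly
> placed atoms from solvation are the usual suspects and are worth checking
> before changing further *.mdp settings.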
>
>
> 2014-08-11 10:32 GMT+01:00 Dawid das <addiw7 at googlemail.com>:
>
> > Dear Gromacs experts,
> >
> > I have encountered quite an annoying problem again. When I try either
> > minimization or NVT dynamics of my solvated protein system, I get a
> > message like this:
> >
> > step -1: Water molecule starting at atom 16281 can not be settled.
> >
> > This happens for a couple of water molecules. I have checked visually
> > where these molecules are, and for the minimization they lie near the
> > surface of the protein (not buried inside). I attach links to the *.gro,
> > *.top, *.log, and other files. What can I do about it?
> >
> > http://www.speedyshare.com/rN4gx/mCherry7-min.tar.bz2
> > http://www.speedyshare.com/BnwU5/mCherry7-nvt-md.tar.bz2
> > http://www.speedyshare.com/3k9jz/charmm27-files.tar.bz2
> >
> > Best wishes,
> >
> > Dawid Grabarek
> >
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 11 Aug 2014 17:45:37 +0800
> From: Theodore Si <sjyzhxw at gmail.com>
> To: gmx-users at gromacs.org
> Subject: Re: [gmx-users] Can we set the number of pure PME nodes when
> using GPU&CPU?
> Message-ID: <53E890C1.6040401 at gmail.com>
> Content-Type: text/plain; charset=windows-1252; format=flowed
>
> Hi Mark,
>
> This is the configuration of our cluster. Could you give us some advice for
> it so that we can make GROMACS run faster on our system?
>
> Each CPU node has 2 CPUs, and each GPU node has 2 CPUs and 2 NVIDIA Tesla K20m GPUs.
>
>
> CPU Node (Intel H2216JFFKR), 332 units:
>     CPU: 2 x Intel Xeon E5-2670 (8 cores, 2.6 GHz, 20 MB cache, 8.0 GT/s)
>     Mem: 64 GB (8 x 8 GB) ECC Registered DDR3-1600 Samsung
> Fat Node (Intel H2216WPFKR), 20 units:
>     CPU: 2 x Intel Xeon E5-2670 (8 cores, 2.6 GHz, 20 MB cache, 8.0 GT/s)
>     Mem: 256 GB (16 x 16 GB) ECC Registered DDR3-1600 Samsung
> GPU Node (Intel R2208GZ4GC), 50 units:
>     CPU: 2 x Intel Xeon E5-2670 (8 cores, 2.6 GHz, 20 MB cache, 8.0 GT/s)
>     Mem: 64 GB (8 x 8 GB) ECC Registered DDR3-1600 Samsung
> MIC Node (Intel R2208GZ4GC), 5 units:
>     CPU: 2 x Intel Xeon E5-2670 (8 cores, 2.6 GHz, 20 MB cache, 8.0 GT/s)
>     Mem: 64 GB (8 x 8 GB) ECC Registered DDR3-1600 Samsung
> Computing network:
>     Mellanox InfiniBand FDR core switch MSX6536-10R (648-port FDR), with
>     Mellanox Unified Fabric Manager, 1 unit
>     Mellanox SX1036 40 Gb Ethernet switch (36 QSFP ports), 1 unit
> Management network:
>     Extreme Summit X440-48t-10G layer-2 switch (48 x 1 GbE), ExtremeXOS
>     licensed, 9 units
>     Extreme Summit X650-24X layer-3 switch (24 x 10 GbE), ExtremeXOS
>     licensed, 1 unit
> Parallel storage: DDN SFA12K storage system, 1 unit
> GPU accelerator: NVIDIA Tesla K20m, 70 units
> MIC: Intel Xeon Phi 5110P (Knights Corner), 10 units
> 40 Gb Ethernet card: Mellanox ConnectX-3 MCX314A-BCBT, 2 x 40 Gb QSFP
>     ports, 16 units
> SSD: Intel SSD 910, 400 GB, PCIe, 80 units
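>
> A minimal sketch of a launch line for one of these GPU nodes (2 x 8-core
> E5-2670 plus 2 x K20m), assuming one PP rank per socket/GPU with 8 OpenMP
> threads each; the binary names, MPI launcher, and rank placement are
> assumptions that depend on the installation:
>
>     # single GPU node, thread-MPI build: 2 PP ranks, one K20m each
>     mdrun -ntmpi 2 -ntomp 8 -gpu_id 01 -deffnm md
>
>     # several GPU nodes, MPI build, assuming the launcher places 2 ranks per node
>     mpirun -np 16 mdrun_mpi -ntomp 8 -gpu_id 01 -deffnm md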
>
> On 8/10/2014 5:50 AM, Mark Abraham wrote:
> > That's not what I said.... "You can set..."
> >
> > -npme behaves the same whether or not GPUs are in use. Using separate
> > ranks for PME aims to minimize the cost of the all-to-all communication
> > of the 3D FFT. That is still relevant when using GPUs, but if separate
> > PME ranks are used, any GPUs on nodes that only have PME ranks are left
> > idle. The most effective approach depends critically on the hardware and
> > simulation setup, and on whether you pay money for your hardware.
> >
> > Mark
> >
> >
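> In concrete terms (the rank counts below are illustrative; the flags are the
> standard mdrun ones), the PP ranks keep the GPUs busy, but any GPU on a node
> that ends up with only PME ranks sits idle:
>
>     # 20 MPI ranks in total: 16 do PP (and can use GPUs), 4 do PME only
>     mpirun -np 20 mdrun_mpi -npme 4 -ntomp 8 -deffnm md
>
> g_tune_pme (gmx tune_pme in the 5.x tools) can scan a range of -npme values
> to find a reasonable split for a given node count.
>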
> > On Sat, Aug 9, 2014 at 2:56 AM, Theodore Si <sjyzhxw at gmail.com> wrote:
> >
> >> Hi,
> >>
> >> You mean that whether or not we use GPU acceleration, -npme is just a
> >> reference? Why can't we set it to an exact value?
> >>
> >>
> >> On 8/9/2014 5:14 AM, Mark Abraham wrote:
> >>
> >>> You can set the number of PME-only ranks with -npme. Whether it's useful
> >>> is another matter :-) The CPU-based PME offload and the GPU-based PP
> >>> offload do not combine very well.
> >>>
> >>> Mark
> >>>
> >>>
> >>> On Fri, Aug 8, 2014 at 7:24 AM, Theodore Si <sjyzhxw at gmail.com> wrote:
> >>>
> >>> Hi,
> >>>> Can we set the number manually with -npme when using GPU acceleration?
> >>>>
> >>>>
>
>
>
> ------------------------------
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-request at gromacs.org.
>
>
> End of gromacs.org_gmx-users Digest, Vol 124, Issue 45
> ******************************************************
>
More information about the gromacs.org_gmx-users
mailing list