[gmx-users] gromacs.org_gmx-users Digest, Vol 118, Issue 104

Singam Karthick sikart21 at yahoo.in
Mon Feb 24 16:21:26 CET 2014


Dear Francis
We are running on Xeon E5-2670 8C 2.60 GHz hardware (2 CPUs, 8 cores, 16 threads) for each temperature, and the exchange attempt frequency is 500 steps. Another system with 126 replicas runs 30 ns per day (system size of ~38,000 atoms). Could you please help us solve this problem?
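
For reference, a run like this is typically launched along the lines of the sketch below. It is an illustrative sketch only: the executable name, file names and rank counts are assumptions, not the exact command used here. With mdrun -multi the MPI ranks are divided evenly over the replicas, and -replex sets the exchange attempt interval in steps.

# illustrative launch: 125 replicas, 2 MPI ranks per replica (250 ranks total),
# with exchange attempts every 500 steps; -multi appends the replica index to
# the file names given via -deffnm (remd_0.tpr, remd_1.tpr, ...)
mpirun -np 250 mdrun_mpi -multi 125 -replex 500 -deffnm remd_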

regards
singam





On Monday, 24 February 2014 4:38 PM, "gromacs.org_gmx-users-request at maillist.sys.kth.se" <gromacs.org_gmx-users-request at maillist.sys.kth.se> wrote:
 
Send gromacs.org_gmx-users mailing list submissions to
    gromacs.org_gmx-users at maillist.sys.kth.se

To subscribe or unsubscribe via the World Wide Web, visit
    https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
or, via email, send a message with subject or body 'help' to
    gromacs.org_gmx-users-request at maillist.sys.kth.se

You can reach the person managing the list at
    gromacs.org_gmx-users-owner at maillist.sys.kth.se

When replying, please edit your Subject line so it is more specific
than "Re: Contents of gromacs.org_gmx-users digest..."


Today's Topics:

   1. REMD slows down drastically (Singam Karthick)
   2. Re: REMD slows down drastically (Francis Jing)
   3. Re: hybrid CPU/GPU nodes (Gloria Saracino)
   4. Position restraints (davhak)


----------------------------------------------------------------------

Message: 1
Date: Mon, 24 Feb 2014 15:32:03 +0800 (SGT)
From: Singam Karthick <sikart21 at yahoo.in>
To: "gromacs.org_gmx-users at maillist.sys.kth.se"
    <gromacs.org_gmx-users at maillist.sys.kth.se>
Subject: [gmx-users] REMD slows down drastically
Message-ID:
    <1393227123.11035.YahooMailNeo at web192803.mail.sg3.yahoo.com>
Content-Type: text/plain; charset=iso-8859-1

Dear members,
I am trying to run an REMD simulation of a poly-alanine (12-residue) system. I used an REMD temperature generator to obtain the temperature range for an exchange probability of 0.3, which gave 125 replicas. When I simulate all 125 replicas, the run slows down drastically (70 picoseconds took around 17 hours). Could anyone please tell me how to solve this issue?

Following is the MDP file:

title           = G4Ga3a4a5 production
;define         = -DPOSRES      ; position restrain the protein
; Run parameters
integrator      = md            ; leap-frog integrator
nsteps          = 12500000      ; 12500000 * 0.002 ps = 25 ns
dt              = 0.002         ; 2 fs
; Output control
nstxout         = 0             ; never save full-precision coordinates
nstvout         = 10000         ; save velocities every 20 ps
nstxtcout       = 500           ; save xtc coordinates every 1 ps
nstenergy       = 500           ; save energies every 1 ps
nstlog          = 100           ; update log file every 0.2 ps
; Bond parameters
continuation    = yes           ; restarting after NVT
constraint_algorithm = lincs    ; holonomic constraints
constraints     = hbonds        ; bonds involving H atoms constrained
lincs_iter      = 1             ; accuracy of LINCS
lincs_order     = 4             ; also related to accuracy
morse           = no
; Neighborsearching
ns_type         = grid          ; search neighboring grid cells
nstlist         = 5             ; 10 fs
rlist           = 1.0           ; short-range neighborlist cutoff (in nm)
rcoulomb        = 1.0           ; short-range electrostatic cutoff (in nm)
rvdw            = 1.0           ; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype     = PME           ; Particle Mesh Ewald for long-range electrostatics
pme_order       = 4             ; cubic interpolation
fourierspacing  = 0.16          ; grid spacing for FFT
; Temperature coupling is on
tcoupl          = V-rescale     ; modified Berendsen thermostat
tc-grps         = protein SOL Cl        ; three coupling groups - more accurate
tau_t           = 0.1  0.1  0.1         ; time constant, in ps
ref_t           = XXXXX  XXXXX  XXXXX   ; reference temperature, one for each group, in K
; Pressure coupling is on
pcoupl          = Parrinello-Rahman     ; pressure coupling on in NPT
pcoupltype      = isotropic     ; uniform scaling of box vectors
tau_p           = 2.0           ; time constant, in ps
ref_p           = 1.0           ; reference pressure, in bar
compressibility = 4.5e-5        ; isothermal compressibility of water, bar^-1
; Periodic boundary conditions
pbc             = xyz           ; 3-D PBC
; Dispersion correction
DispCorr        = EnerPres      ; account for cut-off vdW scheme
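
The XXXXX placeholders in ref_t are filled in with each replica's temperature. A minimal shell sketch of that substitution (with hypothetical file names such as remd_template.mdp and temperatures.dat, one temperature per line) could look like this:

# hypothetical helper: write one mdp file per replica by substituting the
# XXXXX placeholders in the template with that replica's temperature
i=0
while read T; do
    sed "s/XXXXX/${T}/g" remd_template.mdp > remd_${i}.mdp
    i=$((i+1))
done < temperatures.dat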


regards
singam


------------------------------

Message: 2
Date: Mon, 24 Feb 2014 15:52:39 +0800
From: Francis Jing <francijing at gmail.com>
To: gmx-users at gromacs.org, Singam Karthick <sikart21 at yahoo.in>
Subject: Re: [gmx-users] REMD slows down drastically
Message-ID:
    <CACg4Yc3q88pyybKghyq==F_fGYn1hHvTtigTgwqMeH3gfRveag at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

Because every replica has to be calculated separately, the speed does not seem
that slow: 125 replicas x 0.07 ns each is about 9 ns of aggregate sampling in
those 17 hours, or roughly 12 ns/day.

Also, to evaluate it properly, you should post how many CPUs you used and other
information such as the exchange attempt frequency.


Francis


On Mon, Feb 24, 2014 at 3:32 PM, Singam Karthick <sikart21 at yahoo.in> wrote:

> Dear members,
> I am trying to run REMD simulation for poly Alanine (12 residue) system. I
> used remd generator to get the range of temperature with the exchange
> probability of 0.3. I was getting the 125 replicas. I tried to simulate 125
> replicas its drastically slow down the simulation time (for 70 pico seconds
> it took around 17 hours ) could anyone please tell me how to solve this
> issue.
>
> Following is the MDP file
>
> [mdp file snipped; it is quoted in full in Message 1 above]
>
>
> regards
> singam



-- 
Zhifeng (Francis) Jing
Graduate Student in Physical Chemistry
School of Chemistry and Chemical Engineering
Shanghai Jiao Tong University
http://sun.sjtu.edu.cn


------------------------------

Message: 3
Date: Mon, 24 Feb 2014 08:09:04 +0000 (GMT)
From: Gloria Saracino <glosara at yahoo.it>
To: Discussion list for GROMACS users <gmx-users at gromacs.org>
Subject: Re: [gmx-users] hybrid CPU/GPU nodes
Message-ID:
    <1393229344.25905.YahooMailNeo at web173206.mail.ir2.yahoo.com>
Content-Type: text/plain; charset=iso-8859-1

We would like to get one or two nodes completely dedicated to GPU molecular
dynamics, and we had originally thought of

2 x Xeon 6-core E5-2630v2 2.6 GHz 15 MB + 2 x NVIDIA Tesla K20c,

but probably the number of CPU cores should be doubled.

In addition, one or two CPU-only nodes for other software that does not use
GPUs, but still suitable for molecular dynamics, with
4 x AMD Opteron 16-core 6272 2.1 GHz 115 W.
All nodes would be connected with InfiniBand.

Cheers,

Gloria



________________________________
From: Szilárd Páll <pall.szilard at gmail.com>
To: Gloria Saracino <glosara at yahoo.it>; Discussion list for GROMACS users <gmx-users at gromacs.org>
Sent: Friday, 21 February 2014 16:05
Subject: Re: [gmx-users] hybrid CPU/GPU nodes


Please keep the discussion on the mailing list.

You have still not described what exactly you want to accomplish. Get
new nodes? Upgrade existing machines with GPUs?

On Fri, Feb 21, 2014 at 10:12 AM, Gloria Saracino <glosara at yahoo.it> wrote:
> Thank you very much for your suggestions ... we have to keep an eye
> on expenses.
> The vendor proposed that we mount two GPUs on the same node; would having
> multiple GPUs on the same node be beneficial?

Multiple GPUs in a node can be beneficial.

> In that case, should the number of CPU cores be increased accordingly?

In fact, typically one GPU per CPU (socket) is a good balance, but as
I said before, this greatly depends on the kind of CPUs and GPUs used.
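
As a rough illustration of that balance on the dual-socket node with two K20c
cards mentioned earlier (6 cores per socket), a single-node GROMACS 4.6 run
could be started roughly like the sketch below; the file name is a placeholder
and the exact thread counts depend on your hardware:

# one thread-MPI rank per GPU, each rank using one socket's 6 cores via OpenMP;
# -gpu_id 01 maps rank 0 to GPU 0 and rank 1 to GPU 1
mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -deffnm md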

> We would like to use these nodes to simulate large multi-molecule systems
> in explicit water, treating electrostatic interactions with PME. The aim is
> to gradually increase system sizes and simulation times, starting from about
> 200,000 coarse-grained beads, i.e. about 800,000 atoms in an all-atom force field.

That sounds like a decent candidate for running on GPUs. However, you
did not mention what network you have either.

Cheers,
--
Szilárd

> Bye,
> Gloria
>
>
> ________________________________
> From: Szilárd Páll <pall.szilard at gmail.com>
> To: Discussion list for GROMACS users <gmx-users at gromacs.org>; Gloria
> Saracino <glosara at yahoo.it>
> Sent: Friday, 21 February 2014 1:10
> Subject: Re: [gmx-users] hybrid CPU/GPU nodes
>
> Hi,
>
> Unfortunately the answer is not as simple as "use 6-8 cores and you'll
> be fine". Balanced hardware from the GROMACS perspective depends
> greatly on what kind of CPUs and GPUs are used, as well as on the type of
> simulations you plan to run (system size, cut-off, single-node or
> multi-node, etc.).
>
> If budget is not a strong limiting factor, I'd suggest dual-socket
> Xeon Ivy Bridge nodes with 8-12 cores and Tesla K20/K40; or, if you want
> to avoid spending $5k on a single card, GeForce TITAN Black and 780 Ti
> can match and even beat the Teslas.
>
> No advertisement intended, but one thing that can help you is to try
> the "GPU Test drive" which will allow you to test actual hardware.
> AFAIK they even have pre-installed GROMACS, but I can't vouch for the
> correctness of the installations.
>
> Feel free to post the concrete hardware specs you are considering, and test
> log files too if you have any.
>
> Cheers,
> --
> Szilárd
>
>
> On Thu, Feb 20, 2014 at 2:44 PM, Gloria Saracino <glosara at yahoo.it> wrote:
>> Hello,
>> we are evaluating the possibility of expanding our cluster with hybrid
>> CPU/GPU nodes. What would be the best ratio of CPU cores to GPUs on the same
>> node to obtain the best performance from GROMACS 4.6?
>> Thank you in advance,
>>
>> Gloria Saracino
>
>

------------------------------

Message: 4
Date: Mon, 24 Feb 2014 01:22:49 -0800 (PST)
From: davhak <davhak at gmail.com>
To: gmx-users at gromacs.org
Subject: [gmx-users] Position restraints
Message-ID: <1393233769201-5014765.post at n6.nabble.com>
Content-Type: text/plain; charset=us-ascii

Dear All,

I am trying to restrain the movement of a certain particle in all cholesterol
molecules (coarse-grained model) in the Z direction. To do this I add a
[ position_restraints ] section under the [ moleculetype ] in the cholesterol
itp file, like:

[ moleculetype ]
  CHOL         1
...

[ position_restraints ]
;  i  funct       fcx        fcy        fcz
   8      1         0          0     100000

The problem is that during the simulation the restrained atoms move by
~0.2 nm on average already after a few hundred ns, no matter whether the
restraining force constant is set to 100000 or 1000. The log file shows that
the position-restraint energy varies around 300-500 kJ/mol regardless of the
force constant value (there are over 200 cholesterol molecules in the system).

As expected, one gets the same outcome when the position restraints are applied
as described in the GROMACS manual, i.e. by setting a define in the mdp file and
including a separate [ position_restraints ] itp file within an #ifdef ... #endif
clause right after the inclusion of the cholesterol itp file, as sketched below.
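
For concreteness, that setup looks roughly like the sketch below; POSRES_CHOL
and chol_posre.itp are placeholder names, not the ones actually used:

; in the system topology, right after the cholesterol itp is included:
#include "cholesterol.itp"
#ifdef POSRES_CHOL
#include "chol_posre.itp"    ; contains the [ position_restraints ] section shown above
#endif

; and in the mdp file:
define = -DPOSRES_CHOL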

There must be something trivial missing in my understanding of how position
restraints should be applied.

Thanks very much for any suggestion.



------------------------------

-- 
Gromacs Users mailing list

* Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail to gmx-users-request at gromacs.org.


End of gromacs.org_gmx-users Digest, Vol 118, Issue 104
*******************************************************

