[gmx-users] large sim box

Mark Abraham mark.abraham at anu.edu.au
Wed May 26 01:41:02 CEST 2010


----- Original Message -----
From: Yan Gao <y1gao at ucsd.edu>
Date: Wednesday, May 26, 2010 9:19
Subject: Re: [gmx-users] large sim box
To: Discussion list for GROMACS users <gmx-users at gromacs.org>

> Hi Mark,
> 
> Thank you for your comments! They are very helpful.
> 
> I still have several questions regarding your comments:
> 1. Which constraints should I apply for 2fs stepsize? hbonds, all-bonds, h-angles, all-angles

Probably all-bonds, but there's no substitute for reading when learning about such things. Start with the GROMACS manual (which is an excellent resource!) and the papers to which it refers. Then consult some literature on systems similar to yours to get an idea of what others think is sound. Work through all the tutorial material you can get your hands on.
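For a 2 fs time step with LINCS, the relevant .mdp lines would look roughly like the sketch below. This is only a starting point; the right constraint choice still depends on your force field and system.

  constraints              = all-bonds   ; constrain all bond lengths
  constraint_algorithm     = lincs
  lincs_order              = 4
  lincs_iter               = 1
  dt                       = 0.002       ; 2 fs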
 
> I am simulating molecules that contain carbon nanotubes and polymer chains.
> May I have your suggestions based on your experience? Thanks.

Sorry, I don't have any suggestions there, and there's doubtless a range of practice in the literature.

> 2.If I understand correctly, it is enough to use the default value for xtc-precision = 1000. 
> I don't quite understand this. Does 1000 mean that it is calculated with a precision of 0.001 nm, and 1000000 with 1e-6 nm? Thanks.

Sounds right: the value is the factor the coordinates are scaled by before being rounded for the compressed .xtc file, so 1000 corresponds to 0.001 nm precision in the written trajectory; it does not change the precision of the calculation itself. Again, check the manual, section 7.3.
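Written out, the two settings you mention look like this:

  xtc-precision            = 1000        ; coordinates stored to 0.001 nm (the default)
  xtc-precision            = 1000000     ; coordinates stored to 1e-6 nm, with much larger .xtc files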

> 3. Regarding the MPI, may I have your suggestions on any illustrative examples of parallelization with MPI for gromacs?

I'm not sure what you're looking for. There are installation instructions at http://www.gromacs.org/index.php?title=Download_%26_Installation/Installation_Instructions, and reports of MPI simulations in the JCTC paper I linked last time. There's tutorial material out there in Google land that will probably guide you through running such calculations.

> How can I know if the cluster supports MPI? The cluster I am using has a Linux system.

Ask the admins, see what software is installed, etc. Do be aware that unless the interconnect is fast (i.e. better than gigabit ethernet), you will not see much benefit from parallelization.
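A few quick things to try on the login node, assuming a typical Linux cluster (the exact commands depend on how the machine is set up; many clusters use environment modules):

  # look for an MPI launcher and compiler wrapper on the default path
  which mpirun mpiexec mpicc
  # if the cluster uses environment modules, list anything MPI-related
  module avail 2>&1 | grep -i mpi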

>  It seems from the FAQ that I need to re-configure the Gromacs to enable mpi. Do I need additional software/program to support the parallelization? Can I implement the parallelization just like I implement the single processor task? or with several more bash commands? Thanks.

"Implement" implies writing code, which you do not need or want to do. See http://www.lam-mpi.org/tutorials/lam/ for some information.

Mark
 
> Thank you very much for your time and help!
> 
> Young
> 
> 
> 
> 
> On Mon, May 24, 2010 at 1:02 PM, Mark Abraham <mark.abraham at anu.edu.au> wrote:



> ----- Original Message -----
> From: Yan Gao <y1gao at ucsd.edu>
> Date: Tuesday, May 25, 2010 3:02
> Subject: [gmx-users] large sim box
> To: Discussion list for GROMACS users <gmx-users at gromacs.org>

> 
> Hi There,
> 
> I want to use a large simulation box. I did a trial with 15 * 15 * 15 nm box for 100 steps. genbox_d generates 110k water molecules, or 330k atoms.
> 
> It looks like that gromacs can run that large number of atoms. I am sure it will take a long long long time. However if I really want to simulate it, is there any way that I can increase the speed? (except using a better cpu, or paralleling it) Thanks.

> 

> You can control the cost through choice of algorithm and implementation. That means you need to learn how they work and whether some trade-offs are suitable for you. That's going to mean lots of reading, and some experimentation on more tractable systems. Learn to walk before you try to run! However, the only serious way to approach a system this large is with parallelization. Also, reconsider your use of double precision.




> 
> My second question is that: If I have to use clusters or super computer, which one is better? and, do I need a particular software/program to paralleling it? Thanks.

> 

> GROMACS does parallelization using MPI, which will be available on any machine you can find. There are platforms for which GROMACS does not have the specially-optimized non-bonded inner loops - avoid such platforms if you have the choice. You should read the 2008 GROMACS JCTC paper.




> 
> I put my .mdp below:
> 
> integrator               = md
> dt                       = 0.002
> ; duration  2000 ps
> nsteps                   = 100
> comm_mode                = linear
> nstcomm                  = 1
> ; dump config every 300 fs
> nstxout                  = 10
> nstvout                  = 10
> nstfout                  = 10

> 

> Writing output of all of the energies, forces and velocities this often is a waste of time in production simulations. Adjacent data points 10 MD steps apart will be strongly correlated, even if you plan to use the force and/or velocity data. Consider the needs of your analysis, and probably plan to use nstxtcout instead of any of these.
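For a production run, something along these lines is more typical; the intervals below are purely illustrative and should follow from what you actually intend to analyse:

  nstxout                  = 0        ; no full-precision coordinate frames
  nstvout                  = 0        ; no velocity frames
  nstfout                  = 0        ; no force frames
  nstenergy                = 1000     ; energies every 2 ps at dt = 0.002
  nstxtcout                = 2500     ; compressed coordinates every 5 ps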




> 
> nstcheckpoint            = 100
> nstlog                   = 10
> nstenergy                = 10
> nstxtcout                = 10
> xtc-precision            = 1000000

> 

> Read what this does.

> 
> nstlist                  = 1
> ns_type                  = grid
> pbc                      = xyz
> rlist                    = 1.0    ;1.0
> coulombtype              = PME
> rcoulomb                 = 1.0    ;1.0
> fourierspacing           = 0.2    ;0.1

> 

> That will noticeably reduce the cost of PME, but its effect on accuracy is not well known.
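For reference, a more conventional PME setup looks like the sketch below; if you do coarsen the grid, it is worth comparing energies and forces against a short run at the finer spacing. The numbers are illustrative only:

  coulombtype              = PME
  rcoulomb                 = 1.0
  fourierspacing           = 0.12     ; the usual default; 0.2 is noticeably coarser
  pme_order                = 4
  ewald_rtol               = 1e-5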

> 
> pme_order                = 4
> ewald_rtol               = 1e-5
> optimize_fft             = yes
> vdwtype                  = cut-off
> rvdw                     = 1.0    ;1.0
> tcoupl                   = Nose-Hoover
> tc_grps                  = system

> 

> This is often a poor choice. grompp probably told you that.
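A more usual arrangement is to couple the solvent and the solute to separate thermostats. The group names below are placeholders for groups you would define in an index file:

  tc_grps                  = CNT_Polymer  SOL
  tau_t                    = 0.5          0.5
  ref_t                    = 300.0        300.0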

> 
> tau_t                    = 0.5
> ref_t                    = 300.0
> Pcoupl                   = no
> annealing                = no
> gen_vel                  = no
> gen_temp                 = 300.0
> gen_seed                 = 173529
> constraints              = none

> 

> You must use constraints if you wish to use a 2 fs timestep.

> 
> ;energy_excl             = C_H C_H
> constraint_algorithm     = lincs
> unconstrained_start      = no
> lincs_order              = 4
> lincs_iter               = 1

> 

> Mark

> -- 
> Yan Gao
> Jacobs School of Engineering
> University of California, San Diego
> Tel: 858-952-2308
> Email: Yan.Gao.2001 at gmail.com




> -- 
> gmx-users mailing list    gmx-users at gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search 
> before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-request at gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php



More information about the gromacs.org_gmx-users mailing list