[gmx-users] large sim box
Yan Gao
y1gao at ucsd.edu
Wed May 26 01:18:10 CEST 2010
Hi Mark,
Thank you for your comments! They are very helpful.
I still have several questions regarding your comments:
1. Which constraints should I apply for a 2 fs step size: h-bonds, all-bonds,
h-angles, or all-angles?
I am simulating molecules that contain carbon nanotubes and polymer chains.
May I have your suggestions based on your experience? Thanks.
2. If I understand correctly, it is enough to use the default value of
xtc-precision = 1000.
I do not quite understand this. Does 1000 mean that coordinates are stored
with a precision of 0.001 nm, and 1000000 with a precision of 1e-6 nm? Thanks.
3. Regarding MPI, could you suggest any illustrative examples of
parallelization with MPI for GROMACS?
How can I tell whether the cluster supports MPI? The cluster I am using runs
Linux.
It seems from the FAQ that I need to re-configure GROMACS to enable
MPI. Do I need additional software to support the parallelization?
Can I run the parallel job just like I run a single-processor task, or
does it take a few more bash commands? Thanks.
Thank you very much for your time and help!
Young
On Mon, May 24, 2010 at 1:02 PM, Mark Abraham <mark.abraham at anu.edu.au> wrote:
> ----- Original Message -----
> From: Yan Gao <y1gao at ucsd.edu>
> Date: Tuesday, May 25, 2010 3:02
> Subject: [gmx-users] large sim box
> To: Discussion list for GROMACS users <gmx-users at gromacs.org>
>
> > Hi There,
> >
> > I want to use a large simulation box. I did a trial with a
> 15 * 15 * 15 nm box for 100 steps. genbox_d generates 110k water
> molecules, or 330k atoms.
> >
> > It looks like GROMACS can handle that many atoms. I am sure it will
> take a very long time. However, if I really want to simulate it, is
> there any way that I can increase the speed (other than using a better
> CPU or parallelizing it)? Thanks.
>
> You can control the cost through choice of algorithm and implementation.
> That means you need to learn how they work and whether some trade-offs are
> suitable for you. That's going to mean lots of reading, and some
> experimentation on more tractable systems. Learn to walk before you try to
> run! However, the only serious way to approach a system this large is with
> parallelization. Also, reconsider your use of double precision.
>
> > My second question: if I have to use a cluster or a supercomputer,
> which one is better? And do I need particular software to parallelize
> it? Thanks.
>
> GROMACS does parallelization using MPI, which will be available on any
> machine you can find. There are platforms for which GROMACS does not have
> the specially-optimized non-bonded inner loops - avoid such platforms if you
> have the choice. You should read the 2008 GROMACS JCTC paper.
>
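For concreteness, a rough sketch of what this might look like with the
2010-era GROMACS 4.x autoconf build (the binary suffix, processor count,
and file names below are only illustrative; check the cluster's
documentation and the GROMACS installation guide for the exact procedure):

  # check whether an MPI compiler and launcher are already on the cluster
  which mpicc mpirun
  # rebuild mdrun with MPI support, giving the parallel binary a suffix
  ./configure --enable-mpi --program-suffix=_mpi
  make mdrun && make install-mdrun
  # run the same .tpr as in the serial case, launched through MPI
  mpirun -np 8 mdrun_mpi -deffnm bigbox

A batch system such as PBS or SGE usually wraps the mpirun line in a short
job script, so in practice it is "a few more bash commands" rather than a
different workflow.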
> >
> > I put my .mdp below:
> > integrator            = md
> > dt                    = 0.002
> > ; duration 2000 ps
> > nsteps                = 100
> > comm_mode             = linear
> > nstcomm               = 1
> > ; dump config every 300 fs
> > nstxout               = 10
> > nstvout               = 10
> > nstfout               = 10
>
> Writing all of the energies, forces, and velocities this often is a
> waste of time in production simulations. Adjacent data points 10 MD steps
> apart will be strongly correlated, even if you plan to use the force and/or
> velocity data. Consider the needs of your analysis, and probably plan to use
> nstxtcout instead of any of these.
>
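As an illustration only (the numbers are placeholders, not recommendations;
they should follow from what the analysis actually needs), a
production-style output section might look more like:

  nstxout               = 50000    ; full-precision frames, rarely
  nstvout               = 0        ; velocities only if needed
  nstfout               = 0        ; forces only if needed
  nstlog                = 5000
  nstenergy             = 5000
  nstxtcout             = 5000     ; compressed coordinates for analysis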
> > nstcheckpoint         = 100
> > nstlog                = 10
> > nstenergy             = 10
> > nstxtcout             = 10
> > xtc-precision         = 1000000
>
> Read what this does.
>
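For reference, and assuming the usual meaning of this parameter: the .xtc
writer multiplies coordinates by xtc-precision and rounds them to integers
before compressing, so

  xtc-precision         = 1000      ; roughly 0.001 nm resolution (default)
  ; xtc-precision       = 1000000   ; roughly 1e-6 nm, much larger files

The default of 1000 is normally sufficient for analysis.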
> > nstlist               = 1
> > ns_type               = grid
> > pbc                   = xyz
> > rlist                 = 1.0 ;1.0
> > coulombtype           = PME
> > rcoulomb              = 1.0 ;1.0
> > fourierspacing        = 0.2 ;0.1
>
> That will noticeably reduce the cost of PME, but its effect on accuracy is
> not well known.
>
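For comparison, and assuming the GROMACS 4.x default spacing, a denser grid
is the safer starting point when accuracy matters more than speed:

  fourierspacing        = 0.12     ; default: denser, more accurate PME grid
  ; fourierspacing      = 0.2      ; coarser and cheaper; verify the accuracy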
> > pme_order             = 4
> > ewald_rtol            = 1e-5
> > optimize_fft          = yes
> > vdwtype               = cut-off
> > rvdw                  = 1.0 ;1.0
> > tcoupl                = Nose-Hoover
> > tc_grps               = system
>
> This is often a poor choice. grompp probably told you that.
>
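A hedged sketch of separate coupling groups (the group names here are
placeholders and must match index groups in the actual system, for example
the nanotube/polymer solute and the water):

  tc_grps               = Solute   SOL
  tau_t                 = 0.5      0.5
  ref_t                 = 300.0    300.0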
> > tau_t                 = 0.5
> > ref_t                 = 300.0
> > Pcoupl                = no
> > annealing             = no
> > gen_vel               = no
> > gen_temp              = 300.0
> > gen_seed              = 173529
> > constraints           = none
>
> You must use constraints if you want a 2 fs timestep.
>
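A minimal sketch of the usual choice for a 2 fs time step, which constrains
the fastest bond vibrations involving hydrogen (whether h-bonds or all-bonds
is appropriate for this particular nanotube/polymer force field still needs
to be checked against its documentation):

  constraints           = h-bonds
  constraint_algorithm  = lincs
  lincs_order           = 4
  lincs_iter            = 1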
> > ;energy_excl          = C_H C_H
> > constraint_algorithm  = lincs
> > unconstrained_start   = no
> > lincs_order           = 4
> > lincs_iter            = 1
>
> Mark
--
Yan Gao
Jacobs School of Engineering
University of California, San Diego
Tel: 858-952-2308
Email: Yan.Gao.2001 at gmail.com