[gmx-users] Gromacs parallel run: getting differences between two trajectories
Carsten Kutzner
ckutzne at gwdg.de
Fri Sep 12 10:58:12 CEST 2008
vivek sharma wrote:
> Hi Carsten,
> Thanks again for the reply, and my apologies for taking the question
> off topic. Actually I tried the same command with -np 24 and -np 64,
> and in both cases I got different trajectories (when analyzing them
> using ngmx).
If you look at a plot of your data, e.g. energies, they should slowly
diverge with time (start by looking at the first few hundred time steps).
This behaviour I would expect to be ok: long-term averages should not be
affected, while the variables at a certain point in time will be
completely uncorrelated after a while.
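
A minimal sketch of such a check, assuming the energies of both runs
were written out with g_energy to run_np24.xvg and run_np64.xvg (adapt
the file names to whatever you used):

    import numpy as np

    def read_xvg(path):
        """Load a g_energy .xvg file, skipping '#' and '@' header lines."""
        rows = [line.split() for line in open(path)
                if line.strip() and line[0] not in "#@"]
        return np.array(rows, dtype=float)

    a = read_xvg("run_np24.xvg")   # columns: time (ps), energy
    b = read_xvg("run_np64.xvg")
    n = min(len(a), len(b))
    # Absolute difference of the two runs' energies over time: expect it
    # to start near zero and grow slowly as the trajectories decorrelate.
    diff = np.abs(a[:n, 1] - b[:n, 1])
    for t, d in zip(a[:n:100, 0], diff[::100]):
        print(f"t = {t:8.2f} ps   |dE| = {d:.6g}")
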
> Also, can you suggest a tutorial or reference with details on the
> scalability limitations of gromacs (in a parallel environment)?
There is a paper about gromacs scalability on Ethernet from which you
can draw some conclusions about the 3.3.x version. For higher processor
counts (np > 32) check out the new gromacs 4.0 paper.
- Speeding up parallel GROMACS on high-latency networks, 2007, JCC, Vol. 28, No. 12
- GROMACS 4: Algorithms for Highly Efficient, Load-Balanced, and Scalable
  Molecular Simulation, 2008, JCTC 4 (3)
Hope that helps,
Carsten
> With Thanks,
> Vivek
>
> 2008/9/12 Carsten Kutzner <ckutzne at gwdg.de>
>
> Hi Vivek,
>
> I think I'm a bit lost now. We were originally talking about differences
> in trajectories, but from the mail you just sent I can see that you have
> a segmentation fault, which is a different problem.
>
> I can only suggest that if you want to make use of 128 processors you
> should download the CVS version of gromacs or wait until 4.0
> is out. Since in gromacs 3.3.x the protein has to reside as a whole
> on one of the processors, this very likely limits your scaling.
>
> Also, on 128 processors you will get a PME grid of 128x128xSomething
> (since nx and ny have to be divisible by the number of CPUs), which is
> probably way bigger than it needs to be (how big is it for a single
> CPU?). Together with a PME order of 6 this leads to a large overlap
> in the charge grid, which has to be communicated among the processors.
> PME order 4 is better suited to such a high parallelization but, in
> general, for Gromacs 3.x you should have at least a few thousand atoms
> per processor; fewer than 1000 won't give you decent scaling at all.
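>
> As a rough sketch of both effects in numbers (the 7.0 nm box edge is an
> assumed value; read the real one from the last line of your .gro file,
> and note that grompp additionally rounds grid sizes to FFT-friendly
> values):
>
>     import math
>
>     box = 7.0        # nm, assumed cubic box edge (check your .gro file)
>     spacing = 0.12   # nm, fourierspacing from your .mdp
>     atoms = 45599    # atoms in the solvated system
>
>     nx_serial = math.ceil(box / spacing)   # grid points on a single CPU
>     # In gromacs 3.3.x, nx and ny must be divisible by the CPU count:
>     for nprocs in (1, 20, 64, 128):
>         nx = math.ceil(nx_serial / nprocs) * nprocs
>         print(f"np={nprocs:4d}  nx=ny={nx:4d}  atoms/proc={atoms // nprocs:6d}")
>     # np=128 inflates the grid from ~59 to 128 points per dimension and
>     # leaves only ~356 atoms per processor, far below the few thousand
>     # recommended above.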
>
> Carsten
>
> vivek sharma wrote:
>
> Hi Carsten,
> Thanks for your reply. Actually I am running an MD simulation of a
> protein molecule with 270 residues (2687 atoms); after adding
> water it has 45599 atoms, and I am using the recent gromacs test
> version available from gromacs.org (gmxtest-3.3.3.tgz).
>
> Following are the entries from the .mdp file I am using:
>
> **********md.mdp
> title = trp_drg MD
> cpp = /lib/cpp ; location of cpp on SGI
> constraints = all-bonds
> integrator = md
> dt = 0.002 ; ps !
> nsteps = 25000 ; total 50 ps.
> nstcomm = 1
> nstxout = 2500 ; output coordinates every 5.0 ps
> nstvout = 0
> nstfout = 0
> nstlist = 5
> ns_type = grid
> rlist = 0.9
> coulombtype = PME
> rcoulomb = 0.9
> rvdw = 1.4
> fourierspacing = 0.12
> fourier_nx = 0
> fourier_ny = 0
> fourier_nz = 0
> pme_order = 6
> ewald_rtol = 1e-5
> optimize_fft = yes
> ; Berendsen temperature coupling is on in three groups
> Tcoupl = berendsen
> tau_t = 0.1 0.1 0.1
> tc-grps = protein NDP sol
> ref_t = 300 300 300
> ; Pressure coupling is on
> Pcoupl = berendsen
> pcoupltype = isotropic
> tau_p = 0.5
> compressibility = 4.5e-5
> ref_p = 1.0
> ; Generate velocities is on at 300 K.
> gen_vel = yes
> gen_temp = 300.0
> gen_seed = 173529
>
>
> **************** And following are the commands I am using:
> grompp_d -np 128 -f md1.mdp -c 1XU9_A_b4em.gro -p 1XU9_A.top -o
> 1XU9_A_md1_np128.tpr
>
> submit
> mdrun_d
> ///// arguments for mdrun_d:
> -s 1XU9_A_md1_np128.tpr -o 1XU9_A_md1_np128.trr -c
> 1XU9_A_pmd1_np128.gro -g md_np128.log -e md_np128.edr -np 128
>
> *********** Following is the error I am getting:
> Reading file 1XU9_A_md1_np128.tpr, VERSION 3.3.3 (double precision)
> starting mdrun 'CORTICOSTEROID 11-BETA-DEHYDROGENASE, ISOZYME 1'
> 25000 steps, 50.0 ps.
>
> srun: error: n141: task1: Segmentation fault
> srun: Terminating job
>
> ****************************************************************
> Is this information helpful in figuring out the problem?
> Please advise.
>
> With Thanks,
> Vivek
>
> 2008/9/11 Carsten Kutzner <ckutzne at gwdg.de>
>
>
> vivek sharma wrote:
>
> Hi there,
> I am running the gromacs parallel version on a cluster, with
> different -np options.
>
> Hi,
>
> which version of gromacs exactly are you using?
>
>
> On analyzing the 5 ns trajectory using ngmx, I am finding
> differences between the trajectories of two similar runs (the only
> thing varying between the two runs is -np, i.e. 20 and 64), where
> the .mdp file and input files are the same in the two cases.
> I am wondering why I am getting this difference between the two
> trajectories. Please advise whether I did something wrong, or
> what the probable reason for this difference may be.
>
>
> There are many reasons why a parallel run does not yield binary
> identical results to a run with another number of processors, even
> if you start from the same tpr file. If you use PME, then FFTW could
> pick a slightly different algorithm (it will select the fastest one
> for that number of processors; this feature can be turned off by
> passing --disable-fftw-measure to the gromacs configure script). But
> you can still get results that are not binary identical if you do
> FFTs on a varying number of CPUs. Also, due to the limited accuracy
> inherent to any computer, floating-point addition is not associative,
> which can show up in parallel additions.
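>
> A tiny sketch of that last point: the grouping of a sum, and hence the
> number of processors it is split over, can change the floating-point
> result even though the exact sum is the same:
>
>     # Floating-point addition is not associative: regrouping a sum,
>     # as a different processor count effectively does, changes the result.
>     a, b, c = 1e16, -1e16, 1.0
>     print((a + b) + c)   # 1.0
>     print(a + (b + c))   # 0.0  (the 1.0 vanishes next to 1e16)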
>
> Generally, if you run in double precision these effects will be much
> smaller, but you still won't get binary identical results. In all
> cases this leads to trajectories which slowly diverge from each
> other. However, in the first few hundred time steps you should not
> see any difference in the first couple of decimals of the variables
> (positions, velocities, energies, ...).
>
>
> Also, I am not able to make gromacs run faster by increasing
> -np.
>
>
> Please provide the exact command line you used.
>
>
> Is there any maximum limit for scaling gromacs on a parallel
> cluster?
>
>
> Yes, depending on your MD system and on the cluster you use :)
>
> Carsten
>
>
>
> With Thanks,
> Vivek
>
>
--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics Department
Am Fassberg 11
37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/research/dep/grubmueller/
http://www.gwdg.de/~ckutzne