[gmx-users] Parallel pulling with Gromacs 4.0.7: COMM mode problem

Aykut Erbas aerbas at ph.tum.de
Wed Mar 31 12:53:16 CEST 2010


Berk Hess wrote:
> Hi,
>
> I am not aware of any issues with parallel pulling in 4.0.7.
>
> Did you see this note in your log file:
> comm-mode angular will give incorrect results when the comm group
> partially crosses a periodic boundary
>
> If your system is periodic, you should not use comm mode angular.
>
> Berk
Hi
Should I avoid comm-mode angular only for parallel runs, or should I never 
use it, regardless of whether the run is parallel or not?
But that is what I have been using (on a single machine) for many 
simulations without any problem.
I saw the warning, but there seems to be no crossing between init_grp0 and 
the periodic box, and the warning appears only for parallel runs, not for 
single-machine runs.
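
For what it's worth, here is roughly how I check that (a minimal sketch, 
not a definitive test: it assumes a rectangular box, a .gro file with 
positions in nm, and an index file that contains the comm group; the file 
names and the group name 'DIAM' are placeholders for this system):

# check_comm_group.py -- does the comm group approach or cross the box edge?
def read_group(ndx_file, name):
    """Return the 1-based atom indices listed under [ name ] in an .ndx file."""
    indices, in_group = [], False
    for line in open(ndx_file):
        line = line.strip()
        if line.startswith('['):
            in_group = (line.strip('[] ').lower() == name.lower())
        elif in_group and line:
            indices.extend(int(i) for i in line.split())
    return indices

def read_gro(gro_file):
    """Return (positions, box diagonal) from a fixed-column .gro file."""
    lines = open(gro_file).read().splitlines()
    natoms = int(lines[1])
    pos = [(float(l[20:28]), float(l[28:36]), float(l[36:44]))
           for l in lines[2:2 + natoms]]
    box = [float(x) for x in lines[2 + natoms].split()[:3]]
    return pos, box

pos, box = read_gro('conf.gro')          # placeholder file names
group = read_group('index.ndx', 'DIAM')  # comm group name from the .mdp
for d, axis in enumerate('xyz'):
    coords = [pos[i - 1][d] for i in group]
    print('%s: group spans %.3f - %.3f nm, box %.3f nm'
          % (axis, min(coords), max(coords), box[d]))
# If the span touches 0 or the box length in any dimension, part of the
# group is wrapped across the periodic boundary and comm-mode angular
# cannot be trusted.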

thanks

Aykut
>
> > Date: Wed, 31 Mar 2010 12:32:43 +0200
> > From: aerbas at ph.tum.de
> > To: gmx-users at gromacs.org
> > Subject: Re: [gmx-users] Parallel pulling with Gromacs 4.0.7: COMM mode problem
> >
> > Hi everybody
> >
> > There is still a problem with the pull code when running in parallel. With
> > comm_grps active, the positions (pullx.xvg) and forces (pullf.xvg) are
> > reported relative to the absolute coordinates instead of relative to the
> > reference group.
> >
> > comm_mode = angular
> > comm_grps = surface
> >
> > and for the pull section:
> > init_grps = surface
> >
> > should give the coordinates and the forces with respect to the surface,
> > as expected. In parallel runs, however, this is not the case, as can be
> > seen from the pull output files below.
> >
> > On a single machine, everything works well.
> > At the moment, the best workaround seems to be to calculate the forces
> > from the positions while accounting for the motion of the reference group.
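> >
> > A minimal sketch of that workaround (not a real fix, just the bookkeeping
> > I do by hand: the pullx.xvg column indices, the force constant K and the
> > target position are placeholders that have to be taken from the xvg
> > legend and the .mdp file; a nonzero pull_rate1 would additionally shift
> > the target with time):
> >
> > # reref_pull.py -- re-reference pullx.xvg to the surface, redo the force
> > K = 1000.0                # pull_k1 in kJ mol^-1 nm^-2, placeholder value
> > TARGET = (0.0, 0.0, 2.0)  # pull_init1, placeholder value
> > REF_COLS = (1, 2, 3)      # reference-group COM columns (check the legend)
> > GRP_COLS = (4, 5, 6)      # pulled-group position columns (check the legend)
> >
> > for line in open('pullx.xvg'):
> >     if line.startswith(('#', '@')) or not line.strip():
> >         continue                     # skip xvg comments and legends
> >     v = [float(x) for x in line.split()]
> >     rel = [v[g] - v[r] for g, r in zip(GRP_COLS, REF_COLS)]  # w.r.t. surface
> >     force = [K * (x0 - x) for x0, x in zip(TARGET, rel)]     # harmonic force
> >     print(v[0], rel, force)          # time, relative position, force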
> >
> > thanks and
> > best
> >
> > > Dear Aykut:
> > >
> > > 1. Did you see the log file message:
> > >
> > > "comm-mode angular will give incorrect results when the comm group
> > > partially crosses a periodic boundary"
> > Indeed, I saw this. But the surface, which is roughly 5 nm across, is
> > approx. 0.5 nm away from the box edge, so there is no way it can cross.
> > And with G3, or with G4 on a single machine, there is no such warning.
> > >
> > > 2. You say "Actually you might be right about the domain
> > > decomposition", but it seems like you didn't run it on gmx 4 in serial
> > > or with particle decomposition.
> > >
> > Very sorry about this; I forgot to append the log for the single-machine
> > run with G4.
> >
> >
> > Log file for G4 on a single machine:
> > *******************************************
> > Enabling SPC water optimization for 3021 molecules.
> >
> > Configuring nonbonded kernels...
> > Testing x86_64 SSE2 support... present.
> >
> >
> > Removing pbc first time
> >
> > Will apply umbrella COM pulling in geometry 'position'
> > between a reference group and 1 group
> > Pull group 0: 5181 atoms, mass 56947.551
> > Pull group 1: 13 atoms, mass 116.120
> >
> > Initializing LINear Constraint Solver
> >
> >
> > -------- -------- --- Thank You --- -------- --------
> >
> > Center of mass motion removal mode is Angular
> > We have the following groups for center of mass motion removal:
> > 0: DIAM
> >
> > There are: 14359 Atoms
> > Max number of connections per atom is 94
> > Total number of connections is 403131
> > Max number of graph edges per atom is 4
> > Total number of graph edges is 30690
> >
> > Constraining the starting coordinates (step 0)
> >
> > Constraining the coordinates at t0-dt (step 0)
> > RMS relative constraint deviation after constraining: 2.35e-07
> > Initial temperature: 300.447 K
> >
> >
> > > I wish you the best of luck, I'm out of ideas here.
> > >
> > thanks anyway
> > > Chris.
> > >
> > > -- original message --
> > >
> > > Hi
> > >
> > > Actually, you might be right about the domain decomposition.
> > >
> > >
> > > G3 pull.pdo output on a single machine.
> > >
> > > Focus on the 2nd and 3rd columns, which are the x and y positions of the
> > > surface: almost *unchanged*, as expected with the comm_grps = surface option.
> > > *************
> > > 20000.000000 3.149521 1.576811 5.770928 7.149521 1.874820 1.676811
> > > 20000.201172 3.149521 1.576812 5.761463 7.149541 1.880746 1.676812
> > > 20000.400391 3.149520 1.576813 5.771702 7.149560 1.867692 1.676813
> > > 20000.601562 3.149519 1.576813 5.797871 7.149579 1.879650 1.676813
> > > 20000.800781 3.149518 1.576812 5.794115 7.149598 1.887728 1.676812
> > > 20001.000000 3.149517 1.576813 5.778761 7.149617 1.870823 1.676813
> > > 20001.201172 3.149518 1.576815 5.783334 7.149638 1.849283 1.676815
> > > 20001.400391 3.149517 1.576815 5.780031 7.149658 1.877158 1.676815
> > > .....
> > > .....
> > > 39999.402344 3.149799 1.576911 2.249830 9.149739 1.604563 1.676911
> > > 39999.601562 3.149797 1.576911 2.209385 9.149757 1.622380 1.676911
> > > 39999.800781 3.149792 1.576911 2.215503 9.149773 1.653246 1.676911
> > > 40000.000000 3.149791 1.576912 2.221903 9.149791 1.659781 1.676912
> > >
> > >
> > >
> > > G4 pullx.xvg output (in parallel). The 2nd and 3rd columns, which are
> > > the x and y positions of the surface, are *changing*, in contradiction
> > > to the comm_grps = surface option (a short check of the drift follows
> > > the data below).
> > >
> > > *********
> > > 0.4000 3.1498 2.997 -0.391131 -0.331925
> > > 0.8000 3.14903 2.99499 -0.391976 -0.346309
> > > 1.2000 3.14753 2.99846 -0.372158 -0.407621
> > > 1.6000 3.14635 3.00695 -0.337084 -0.422437
> > > 2.0000 3.14465 3.00585 -0.306999 -0.474991
> > > 2.4000 3.14365 3.00408 -0.30164 -0.48047
> > > 2.8000 3.14338 3.00447 -0.285076 -0.483861
> > > 3.2000 3.14361 3.00119 -0.226717 -0.460955
> > > ........
> > > ..........
> > > 2838.0000 3.20024 0.662325 1.7185 0.986139
> > > 2838.4000 3.19435 0.661913 1.74023 1.0404
> > > 2838.8000 3.18835 0.666171 1.8073 1.02766
> > > 2839.2000 3.18264 0.658261 1.81687 0.999429
> > > 2839.6000 3.17766 0.668439 1.82782 1.05693
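> > >
> > > A quick check that quantifies this drift (minimal sketch; it assumes the
> > > reference-group x and y really are columns 2 and 3 of the file, as above):
> > >
> > > # drift.py -- spread of the reference-group x/y columns in pullx.xvg
> > > cols, data = (1, 2), []            # 0-based indices of columns 2 and 3
> > > for line in open('pullx.xvg'):
> > >     if line.startswith(('#', '@')) or not line.strip():
> > >         continue
> > >     v = line.split()
> > >     data.append([float(v[c]) for c in cols])
> > > for i, c in enumerate(cols):
> > >     vals = [row[i] for row in data]
> > >     print('column %d drifts by %.4f nm' % (c + 1, max(vals) - min(vals)))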
> > >
> > >
> > > Here is the log file for the G4 pull run in parallel:
> > >
> > > ********************************
> > > Initializing Domain Decomposition on 32 nodes
> > > Dynamic load balancing: auto
> > > Will sort the charge groups at every domain (re)decomposition
> > > Initial maximum inter charge-group distances:
> > > two-body bonded interactions: 0.507 nm, LJ-14, atoms 5186 5197
> > > multi-body bonded interactions: 0.507 nm, Proper Dih., atoms 5186 5197
> > > Minimum cell size due to bonded interactions: 0.557 nm
> > > Maximum distance for 5 constraints, at 120 deg. angles, all-trans:
> > > 0.200 nm
> > > Estimated maximum distance required for P-LINCS: 0.200 nm
> > > Guess for relative PME load: 0.20
> > > Will use 24 particle-particle and 8 PME only nodes
> > > This is a guess, check the performance at the end of the log file
> > > Using 8 separate PME nodes
> > > Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
> > > Optimizing the DD grid for 24 cells with a minimum initial size of
> > > 0.697 nm
> > > The maximum allowed number of cells is: X 9 Y 4 Z 9
> > > Domain decomposition grid 4 x 2 x 3, separate PME nodes 8
> > >
> > > comm-mode angular will give incorrect results when the comm group
> > > partially crosses a periodic boundary
> > > Interleaving PP and PME nodes
> > > This is a particle-particle only node
> > >
> > > Domain decomposition nodeid 0, coordinates 0 0 0
> > >
> > > Table routines are used for coulomb: TRUE
> > > Table routines are used for vdw: FALSE
> > > Will do PME sum in reciprocal space.
> > >
> > > -------- -------- --- Thank You --- -------- --------
> > > Using a Gaussian width (1/beta) of 0.25613 nm for Ewald
> > > Cut-off's: NS: 0.8 Coulomb: 0.8 LJ: 0.8
> > > System total charge: -0.000
> > > Generated table with 3600 data points for Ewald.
> > > Tabscale = 2000 points/nm
> > > Generated table with 3600 data points for LJ6.
> > > Tabscale = 2000 points/nm
> > > Generated table with 3600 data points for LJ12.
> > > Tabscale = 2000 points/nm
> > > Generated table with 3600 data points for 1-4 COUL.
> > > Tabscale = 2000 points/nm
> > > Generated table with 3600 data points for 1-4 LJ6.
> > > Tabscale = 2000 points/nm
> > > Generated table with 3600 data points for 1-4 LJ12.
> > > Tabscale = 2000 points/nm
> > >
> > > Enabling SPC water optimization for 3021 molecules.
> > >
> > > Configuring nonbonded kernels...
> > >
> > >
> > > Removing pbc first time
> > >
> > > Will apply umbrella COM pulling in geometry 'position'
> > > between a reference group and 1 group
> > > Pull group 0: 5181 atoms, mass 56947.551
> > > Pull group 1: 13 atoms, mass 116.120
> > >
> > > Initializing Parallel LINear Constraint Solver
> > >
> > >
> > >
> > > Linking all bonded interactions to atoms
> > > There are 85833 inter charge-group exclusions,
> > > will use an extra communication step for exclusion forces for PME
> > >
> > > The initial number of communication pulses is: X 1 Y 1 Z 1
> > > The initial domain decomposition cell size is: X 1.58 nm Y 1.58 nm Z
> > > 2.23 nm
> > >
> > > The maximum allowed distance for charge groups involved in
> > > interactions is:
> > > non-bonded interactions 0.800 nm
> > > (the following are initial values, they could change due to box
> > > deformation)
> > > two-body bonded interactions (-rdd) 0.800 nm
> > > multi-body bonded interactions (-rdd) 0.800 nm
> > > atoms separated by up to 5 constraints (-rcon) 1.575 nm
> > >
> > > When dynamic load balancing gets turned on, these settings will change
> > > to:
> > > The maximum number of communication pulses is: X 1 Y 1 Z 1
> > > The minimum size for domain decomposition cells is 0.800 nm
> > > The requested allowed shrink of DD cells (option -dds) is: 0.80
> > > The allowed shrink of domain decomposition cells is: X 0.51 Y 0.51 Z 0.36
> > > The maximum allowed distance for charge groups involved in
> > > interactions is:
> > > non-bonded interactions 0.800 nm
> > > two-body bonded interactions (-rdd) 0.800 nm
> > > multi-body bonded interactions (-rdd) 0.800 nm
> > > atoms separated by up to 5 constraints (-rcon) 0.800 nm
> > >
> > >
> > > Making 3D domain decomposition grid 4 x 2 x 3, home cell index 0 0 0
> > >
> > > Center of mass motion removal mode is Angular
> > > We have the following groups for center of mass motion removal:
> > > 0: DIAM
> > >
> >
>



