[gmx-users] Parallel pulling with Gromacs 4.0.7: COMM mode problem

Aykut Erbas aerbas at ph.tum.de
Tue Mar 30 15:45:32 CEST 2010


Hi

Actually, you might be right about the domain decomposition.


G3 pull.pdo output file on a single machine.

Focus on the 2nd and 3rd columns, which are the x and y positions of the 
surface: almost *unchanged*, as expected for the comm_grps = surface (DIAM) setting:
*************
20000.000000    3.149521    1.576811    5.770928    7.149521    1.874820    1.676811
20000.201172    3.149521    1.576812    5.761463    7.149541    1.880746    1.676812
20000.400391    3.149520    1.576813    5.771702    7.149560    1.867692    1.676813
20000.601562    3.149519    1.576813    5.797871    7.149579    1.879650    1.676813
20000.800781    3.149518    1.576812    5.794115    7.149598    1.887728    1.676812
20001.000000    3.149517    1.576813    5.778761    7.149617    1.870823    1.676813
20001.201172    3.149518    1.576815    5.783334    7.149638    1.849283    1.676815
20001.400391    3.149517    1.576815    5.780031    7.149658    1.877158    1.676815
.....
.....
39999.402344    3.149799    1.576911    2.249830    9.149739    1.604563    1.676911
39999.601562    3.149797    1.576911    2.209385    9.149757    1.622380    1.676911
39999.800781    3.149792    1.576911    2.215503    9.149773    1.653246    1.676911
40000.000000    3.149791    1.576912    2.221903    9.149791    1.659781    1.676912



G4 pull.xvg output (in parallel). The 2nd and 3rd columns, which are the x and y 
positions of the surface, are *changing*, in contradiction to the comm_grps = surface 
(DIAM) setting (see the short check script after the data):

*********
0.4000     3.1498  2.997      -0.391131          -0.331925
0.8000     3.14903 2.99499 -0.391976       -0.346309
1.2000     3.14753 2.99846 -0.372158       -0.407621
1.6000     3.14635 3.00695 -0.337084       -0.422437
2.0000     3.14465 3.00585 -0.306999       -0.474991
2.4000     3.14365 3.00408 -0.30164        -0.48047
2.8000     3.14338 3.00447 -0.285076       -0.483861
3.2000     3.14361 3.00119 -0.226717       -0.460955
........
..........
2838.0000       3.20024 0.662325        1.7185  0.986139
2838.4000       3.19435 0.661913        1.74023 1.0404
2838.8000       3.18835 0.666171        1.8073  1.02766
2839.2000       3.18264 0.658261        1.81687 0.999429
2839.6000       3.17766 0.668439        1.82782 1.05693
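
A small Python sketch to quantify the drift of those columns in each file. 
The assumptions here are mine, not from the original runs: whitespace-separated 
numeric columns, header lines starting with '#' or '@' to be skipped, 
"pull.pdo" / "pull.xvg" as placeholder file names, and columns 2 and 3 holding 
the reference-group (surface) x and y positions:

*********
#!/usr/bin/env python
# Sketch: report min/max/drift of selected columns in a pull output file.
# Assumptions: whitespace-separated numeric columns; lines starting with
# '#' or '@' (pdo/xvg headers) are skipped; file names are placeholders.

def column_drift(filename, columns=(1, 2)):
    """Return {column: (min, max, max - min)} for the 0-based columns given."""
    values = {c: [] for c in columns}
    with open(filename) as handle:
        for line in handle:
            line = line.strip()
            if not line or line.startswith(('#', '@')):
                continue  # skip blank lines and pdo/xvg header lines
            fields = line.split()
            for c in columns:
                values[c].append(float(fields[c]))
    return {c: (min(v), max(v), max(v) - min(v)) for c, v in values.items()}

if __name__ == "__main__":
    # Columns 2 and 3 (0-based 1 and 2) hold the surface x and y here.
    for name in ("pull.pdo", "pull.xvg"):
        for col, (lo, hi, span) in sorted(column_drift(name).items()):
            print("%s column %d: min %.6f  max %.6f  drift %.6f"
                  % (name, col + 1, lo, hi, span))

For the excerpts above, columns 2 and 3 of the G3 pdo output change by less 
than 0.001 over the run, while column 3 of the G4 xvg output alone moves by 
more than 2, which is the contradiction described above.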


Here is the log file for the G4 (pulling) run in parallel:

********************************
Initializing Domain Decomposition on 32 nodes
Dynamic load balancing: auto
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
    two-body bonded interactions: 0.507 nm, LJ-14, atoms 5186 5197
  multi-body bonded interactions: 0.507 nm, Proper Dih., atoms 5186 5197
Minimum cell size due to bonded interactions: 0.557 nm
Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.200 nm
Estimated maximum distance required for P-LINCS: 0.200 nm
Guess for relative PME load: 0.20
Will use 24 particle-particle and 8 PME only nodes
This is a guess, check the performance at the end of the log file
Using 8 separate PME nodes
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Optimizing the DD grid for 24 cells with a minimum initial size of 0.697 nm
The maximum allowed number of cells is: X 9 Y 4 Z 9
Domain decomposition grid 4 x 2 x 3, separate PME nodes 8

comm-mode angular will give incorrect results when the comm group 
partially crosses a periodic boundary
Interleaving PP and PME nodes
This is a particle-particle only node

Domain decomposition nodeid 0, coordinates 0 0 0

Table routines are used for coulomb: TRUE
Table routines are used for vdw:     FALSE
Will do PME sum in reciprocal space.

-------- -------- --- Thank You --- -------- --------
Using a Gaussian width (1/beta) of 0.25613 nm for Ewald
Cut-off's:   NS: 0.8   Coulomb: 0.8   LJ: 0.8
System total charge: -0.000
Generated table with 3600 data points for Ewald.
Tabscale = 2000 points/nm
Generated table with 3600 data points for LJ6.
Tabscale = 2000 points/nm
Generated table with 3600 data points for LJ12.
Tabscale = 2000 points/nm
Generated table with 3600 data points for 1-4 COUL.
Tabscale = 2000 points/nm
Generated table with 3600 data points for 1-4 LJ6.
Tabscale = 2000 points/nm
Generated table with 3600 data points for 1-4 LJ12.
Tabscale = 2000 points/nm

Enabling SPC water optimization for 3021 molecules.

Configuring nonbonded kernels...


Removing pbc first time

Will apply umbrella COM pulling in geometry 'position'
between a reference group and 1 group
Pull group 0:  5181 atoms, mass 56947.551
Pull group 1:    13 atoms, mass   116.120

Initializing Parallel LINear Constraint Solver



Linking all bonded interactions to atoms
There are 85833 inter charge-group exclusions,
will use an extra communication step for exclusion forces for PME

The initial number of communication pulses is: X 1 Y 1 Z 1
The initial domain decomposition cell size is: X 1.58 nm Y 1.58 nm Z 2.23 nm

The maximum allowed distance for charge groups involved in interactions is:
                 non-bonded interactions           0.800 nm
(the following are initial values, they could change due to box deformation)
            two-body bonded interactions  (-rdd)   0.800 nm
          multi-body bonded interactions  (-rdd)   0.800 nm
  atoms separated by up to 5 constraints  (-rcon)  1.575 nm

When dynamic load balancing gets turned on, these settings will change to:
The maximum number of communication pulses is: X 1 Y 1 Z 1
The minimum size for domain decomposition cells is 0.800 nm
The requested allowed shrink of DD cells (option -dds) is: 0.80
The allowed shrink of domain decomposition cells is: X 0.51 Y 0.51 Z 0.36
The maximum allowed distance for charge groups involved in interactions is:
                 non-bonded interactions           0.800 nm
            two-body bonded interactions  (-rdd)   0.800 nm
          multi-body bonded interactions  (-rdd)   0.800 nm
  atoms separated by up to 5 constraints  (-rcon)  0.800 nm


Making 3D domain decomposition grid 4 x 2 x 3, home cell index 0 0 0

Center of mass motion removal mode is Angular
We have the following groups for center of mass motion removal:
  0:  DIAM


-------- -------- --- Thank You --- -------- --------



There are no G3 parallel simulations, since it is desperately slow(er) ....


bests


Aykut


chris.neale at utoronto.ca wrote:
>> If you pull in G3 with AFM option, with your reference groups is the 
>> surface, in the output pull.pdo file what you will get is  solute 
>> (pulled group) coordinates /wrt the surface...
>
> Yes. Can we see the data that you get as output in each case and tell 
> us what the major difference is?
>
>> the coordinates of your reference groups as a function of time does 
>> not change, right...
>
> Not normally true. A pulling force will be applied to your surface in 
> addition to your solute. The only thing that might keep it static is:
>
>> Note that I have angular COMM mode for such simulation.
>> comm_mode                = angular
>> nstcomm                  = 1
>> comm_grps                = DIAM
>
> But who knows how and if this works in gromacs 4 with domain 
> decomposition. Please try gromacs 4 in serial and see if you get the 
> same unexpected results.
Actually, if you read the very first mail, you will see that this is the 
ongoing problem so far.
>
>> that the output coordinates for my pulled groups should be /wrt the 
>> surface (DIAM) However, the situation is completely, how can I say, 
>> smtg else...
>
> Can you show some data for this?
>
>> however, it treats like iit_grps is 0 0 0 all the time but it is not
>
> Can you show some data for this?
>
>> In the log file, I can see that COMM grp is the surface and the pulling
>> is /wrt the surface again but the output gives smtg as COMM grp is 
>> the whole box
>
> Can you show some data for this?
>
> Also, please use entire words. smtg and /wrt, while quicker to type, 
> are actually harder and more annoying to read.
>
>



