[gmx-developers] How to distribute charges over parallel nodes
Igor Leontyev
ileontyev at ucdavis.edu
Wed May 4 10:32:15 CEST 2011
To make the partial charges adjustable according to the acting field, I have
introduced modifications to gromacs 4.0.7. The serial (single-thread)
version seems to be ready, and I now want to implement parallelization (with
particle decomposition). In my current implementation:
- values of mdatoms->chargeA for the local atoms are updated in "do_md" at the
beginning of each timestep;
- 'MPI_Sendrecv' + 'gmx_wait' are used in "do_force" (right after the call to
"move_cgcm") to distribute the new charges over the parallel nodes; a
simplified sketch of this exchange is given below.
After this exchange, the array mdatoms->chargeA has updated values on all
nodes. But a problem arises later in "gmx_pme_do" (a routine I have not
modified) that hangs the execution and even the PC.
Is it possible that the source of the problem is the use of ('MPI_Sendrecv' +
'gmx_wait') in the wrong place in the code?
Many communications are performed in "gmx_pme_do"; for example, "pmeredist"
calls 'MPI_Alltoallv' to redistribute charges and coordinates over the nodes.
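
As I understand it, that is a personalized redistribution in which each node
sends a different slice to every other node. In standalone form (again not
the actual pmeredist code; the counts and buffers are illustrative), the
pattern is:

/* Minimal illustration of the MPI_Alltoallv redistribution pattern.
 * sendcount[i] charges go from this node to node i and recvcount[i]
 * charges arrive from node i; displacements are prefix sums of counts. */
#include <mpi.h>

static void redistribute_charges(double *q_send, int *sendcount,
                                 double *q_recv, int *recvcount,
                                 int nnodes, MPI_Comm comm)
{
    int sdispl[nnodes], rdispl[nnodes];   /* C99 VLAs, for brevity only */

    sdispl[0] = rdispl[0] = 0;
    for (int i = 1; i < nnodes; i++)
    {
        sdispl[i] = sdispl[i - 1] + sendcount[i - 1];
        rdispl[i] = rdispl[i - 1] + recvcount[i - 1];
    }

    MPI_Alltoallv(q_send, sendcount, sdispl, MPI_DOUBLE,
                  q_recv, recvcount, rdispl, MPI_DOUBLE, comm);
}
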
Is there a particular reason in the gromacs code why some communications are
done with 'MPI_Sendrecv' but others with 'MPI_Alltoallv'? What is the right
way (or the right MPI routine) to distribute the locally updated charges over
all nodes?
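
For concreteness, the collective that seems to do exactly this in one call is
'MPI_Allgatherv'; a minimal sketch of what I have in mind (standalone, with
illustrative buffer names, not GROMACS variables):

/* Hypothetical alternative: gather every node's block of updated charges
 * and deliver the complete array to all nodes in a single collective.
 * counts[i] = number of local atoms on node i, displ[i] = offset of node
 * i's block in q_all. */
#include <mpi.h>

static void allgather_charges(double *q_local, int nlocal,
                              double *q_all, int *counts, int *displ,
                              MPI_Comm comm)
{
    MPI_Allgatherv(q_local, nlocal, MPI_DOUBLE,
                   q_all, counts, displ, MPI_DOUBLE, comm);
}
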
Thank you,
Igor Leontyev