[gmx-developers] alternate parallelization in src/gmxlib/network.c ?

Berk Hess hessb at mpip-mainz.mpg.de
Tue Jun 14 09:12:05 CEST 2005


Nathan Moore wrote:

>I'm presently working to port Gromacs to the IBM Blue Gene architecture. 
>The parallel benchmark on the GROMACS website (DPPC) scales only to about
>30-40 processors, and seems to slow down at sizes larger than 128 nodes. 
>I'm hoping to extend scalability to at least 512 processors and thought a
>first step might be to try a different message passing scheme (tuned
>versions of All to All are available for the architecture and the message
>passing interconnect is quite fast).  From reading the manual this change
>seems easy to implement in the communicate_r function but much to my
>embarrassment I seem to be completely incapable of finding the function
>definition in the source.  Would anyone care to share where this function
>is defined?  grep "communicate_r" * */* */*/* doesn't return anything from
>the top of the source tree...  I assume there's a similar "communicate_f"
>that would need to be revised as well.
>
>Also, if this sort of message passing routine has been implemented in
>previous source revisions, I'd love to read about it.
>
All the important communication during a simulation (except for PME)
is done in src/gmxlib/network.c and src/gmxlib/mvxvf.c.
It seems like all of my communication test code has disappeared,
except for the gmx_sumd routine in network.c.
You can have a look at this and easily copy it to the other two
summation routines in network.c (gmx_sumf and gmx_sumi).
The only thing left then is to distribute x, v and f in mvxvf.c
without the ring; this too should be pretty straightforward.
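To sketch why replacing the ring helps: a ring global sum needs N-1
sequential communication steps, while a tuned allreduce (what the Blue
Gene collectives provide) can use recursive doubling and finish in
log2(N) steps. The snippet below is only an illustration of that
pattern, not GROMACS code: it simulates the ranks as array slots in
plain C, with no actual MPI calls, so "exchange with the rank whose id
differs in bit k" becomes an array lookup.

```c
#include <stdio.h>

#define NRANKS 8  /* simulated rank count; assume a power of two */

/* Recursive-doubling allreduce, simulated in-process.
 * In step k every rank adds in the value of the rank whose id
 * differs in bit k; after log2(NRANKS) steps each slot holds the
 * global sum, versus NRANKS-1 steps for a ring. */
static void allreduce_sum(double val[NRANKS])
{
    for (int bit = 1; bit < NRANKS; bit <<= 1) {
        double next[NRANKS];
        for (int r = 0; r < NRANKS; r++)
            next[r] = val[r] + val[r ^ bit];  /* pairwise exchange */
        for (int r = 0; r < NRANKS; r++)
            val[r] = next[r];
    }
}
```

In real code the inner exchange would be an MPI_Allreduce (or the
vendor-tuned equivalent) over the per-rank force or energy buffers,
replacing the ring pass in gmx_sumd and its siblings.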

But I guess the scaling on 512 processors will not be very good
without domain decomposition.

Berk.

More information about the gromacs.org_gmx-developers mailing list