[gmx-users] mpi performance issues

David spoel at xray.bmc.uu.se
Mon Apr 19 23:03:01 CEST 2004


On Mon, 2004-04-19 at 22:47, istvan at kolossvary.hu wrote:
> Hi, 
> 
> I see there are a lot of threads on the mailing list about Gromacs MPI scaling. 
> I also tried it "out of the box" and got the same horrible scaling as others. 
> I read through the archive and picked up some clues, although it seems that 
> scaling is extremely sensitive to the (molecular) system and that I shouldn't 
> expect much with PME. However, looking at the 20-50% CPU usage on our dual-processor 
> nodes (3.06 GHz Xeons with 100 Mbit Ethernet and Red Hat 9), I was wondering 
> whether the payload, i.e. the size of the packets sent between nodes, is too 
> small. I believe Erik mentioned this once on the list. Apparently the payload 
> size is 1000 bytes (plus 40 bytes of TCP/IP overhead) on our system. It would 
> be much more efficient to use a larger payload, say 1400 bytes. My question is 
> whether this is something that can be set in Gromacs or whether it is an OS 
> parameter. In either case, how can I change it? 
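
Before tuning any parameters, it is worth measuring what the interconnect
actually delivers as a function of message size; that tells you whether you
are latency-bound or bandwidth-bound on Fast Ethernet. Here is a minimal
ping-pong sketch of my own (plain MPI, nothing GROMACS-specific; treat it as
an illustration) that you can build with mpicc and run on two processes:

/* pingpong.c -- illustration only, not part of GROMACS.
 * Measures one-way time and bandwidth between ranks 0 and 1 for a
 * range of message sizes.
 * Build: mpicc -O2 pingpong.c -o pingpong
 * Run:   mpirun -np 2 ./pingpong                                   */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nprocs, i, size;
    const int nreps = 200;
    const int maxsize = 1 << 20;          /* up to 1 MB messages */
    char *buf;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    if (nprocs < 2) {
        if (rank == 0) fprintf(stderr, "run on at least 2 processes\n");
        MPI_Finalize();
        return 1;
    }
    buf = malloc(maxsize);

    for (size = 64; size <= maxsize; size *= 4) {
        double t0, t;
        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < nreps; i++) {
            if (rank == 0) {
                MPI_Send(buf, size, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, size, MPI_BYTE, 1, 0, MPI_COMM_WORLD, &st);
            } else if (rank == 1) {
                MPI_Recv(buf, size, MPI_BYTE, 0, 0, MPI_COMM_WORLD, &st);
                MPI_Send(buf, size, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
            }
        }
        t = (MPI_Wtime() - t0) / (2.0 * nreps);   /* one-way time per message */
        if (rank == 0)
            printf("%8d bytes   %9.1f us   %7.2f MB/s\n",
                   size, 1e6 * t, size / t / 1e6);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

On 100 Mbit Ethernet the bandwidth column should level off somewhere around
10-11 MB/s; if it only gets there for messages of tens of kilobytes and
above, then 1000-byte packets are indeed far from the sweet spot.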

The payload size is a property of the MPI library. With LAM-MPI there is an
option to change it. Another option is to try an MPI implementation that runs
over M-VIA. Has anyone tried that with GROMACS?

http://www.nersc.gov/research/FTG/via/

This should bypass the Linux TCP/IP stack and give much lower latency on
cheap hardware. Volunteers, please...
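
As for the OS-parameter part of the question: the payload per packet is
bounded by the interface MTU (1500 bytes on Ethernet, of which roughly 40
bytes go to TCP/IP headers), and the per-connection behaviour is set with
setsockopt() by whatever TCP transport the MPI library uses, not by GROMACS
itself. A rough sketch of the knobs involved (illustration only, not LAM or
GROMACS source):

/* tcp_knobs.c -- illustration of the socket options an MPI TCP transport
 * would typically set; GROMACS itself does not touch these, they belong
 * to the MPI library and the kernel.                                     */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    int bufsize = 256 * 1024;   /* larger socket buffers help bulk transfers */

    /* Disable Nagle's algorithm so small messages go out immediately
     * instead of being coalesced (trades some bandwidth for latency). */
    if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0)
        perror("TCP_NODELAY");

    /* Enlarge the send/receive buffers; the kernel caps these at the
     * net.core.wmem_max / net.core.rmem_max sysctl values.            */
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof(bufsize)) < 0)
        perror("SO_SNDBUF");
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof(bufsize)) < 0)
        perror("SO_RCVBUF");

    printf("options set on socket %d\n", fd);
    return 0;
}

Whether any of this helps depends on how the MPI library already sets up its
sockets, so treat it as a starting point for experiments rather than a fix.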


-- 
David.
________________________________________________________________________
David van der Spoel, PhD, Assist. Prof., Molecular Biophysics group,
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596,  	75124 Uppsala, Sweden
phone:	46 18 471 4205		fax: 46 18 511 755
spoel at xray.bmc.uu.se	spoel at gromacs.org   http://xray.bmc.uu.se/~spoel
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++



