[gmx-users] GROMACS with MPI/GAMMA

Tony Ladd ladd at che.ufl.edu
Thu Dec 7 16:27:56 CET 2006


I sent the previous message before including my comments:

The Opteron 275 is 10-25% faster than the P4D for GROMACS. However, the
Opteron loses some of its edge when running dual threads. MPI/GAMMA
outperforms both LAM and OpenMPI by a significant margin on the DPPC and
Villin benchmarks. With DPPC, MPI/GAMMA scales as well as InfiniBand, but for
Villin it reaches about 75% of the InfiniBand performance. The reduced latency
and more efficient flow control of the GAMMA protocol make a significant
difference to the scalability of Gigabit Ethernet. In general, the Intel NICs
with GAMMA perform better than the proprietary RDMA NICs from Ammasso and
Level 5 Networks.

I had to make a small change to GROMACS 3.3 to get MPI/GAMMA to run. In
futil.c, at line 102, the code tries to close a NULL file pointer, which
strictly speaking is illegal (undefined behavior). This causes GAMMA to hang.
It seems harmless to comment the call out.
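
For anyone applying the same workaround, here is a minimal sketch (plain C) of
the kind of guard involved; the function and variable names are illustrative
assumptions, not the actual futil.c identifiers:

#include <stdio.h>

/* Passing NULL to fclose() is undefined behavior; in GROMACS 3.3's futil.c
 * this made MPI/GAMMA hang. Guarding the close (or commenting it out, as
 * described above) avoids the problem. */
void close_if_open(FILE *fp)
{
    if (fp != NULL)
    {
        fclose(fp);  /* only close a stream that was actually opened */
    }
}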

More details can be found at
http://ladd.che.ufl.edu/research/beoclus/beoclus.htm
The GAMMA website is http://www.disi.unige.it/project/gamma/mpigamma

Tony


-------------------------------
Tony Ladd
Chemical Engineering
University of Florida
PO Box 116005
Gainesville, FL 32611-6005

Tel: 352-392-6509
FAX: 352-392-9513
Email: tladd at che.ufl.edu
Web: http://ladd.che.ufl.edu 



