[gmx-users] problem with MPI version under OSX

Dean Johnson dtj at uberh4x0r.org
Fri Oct 31 19:48:01 CET 2003


On Friday, October 31, 2003, at 11:58 AM, Erik Lindahl wrote:

> That sounds like a problem with MPI and not Gromacs (unless you had 
> other things running that affected load balancing).
>
> Since Gromacs combined with LAM-MPI works well on x86 and other 
> platforms I don't think there is much that can be done in Gromacs - 
> have you checked the LAM-MPI site/mailing-list?

I agree that it's likely not a Gromacs issue. It appears to be some ugly 
system issue ("blah blah POSIX blah blah Semaphores blah blah"). I will 
confirm that once I get Amber built and run my benchmarks with it. It's 
kinda funny, because we do see a distinct performance advantage running 
16x1 as compared to 8x2 on Intel boxes, and I wonder if the OSX problem 
is a really pathological version of that. I'm getting pretty darn good 
numbers, though. Adding the two more G5s tonight will be the big scaling 
test.

Okay, another subject. I am doing some handwaving in predicting performance 
at larger node counts. Yes, it's horribly dangerous, but I gotta do it (a 
rough sketch of the kind of extrapolation I mean is below). From our 
testing, on our model, we see that Myrinet is worth between a 68% and an 
86% performance boost over gigE for Gromacs on x86 boxes. We are likely 
going to go with Infiniband on whatever boxes we decide on (G5 vs. I2 vs. 
Opteron). Does anybody have a feel for the relative performance of Myrinet 
vs. Infiniband, or gigE vs. Infiniband, with Gromacs?
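
For reference, here is a minimal sketch (Python, with made-up timings) of 
the two-point, Amdahl-style extrapolation I have in mind: fit a serial and 
a parallel term to two measured points, then project to larger process 
counts. The process counts and per-step times below are placeholders, not 
our actual benchmark numbers.

# A minimal sketch of the back-of-envelope extrapolation described above:
# fit an Amdahl-style model, t(n) = s + p/n, to two measured points and
# extrapolate to larger process counts.  The timings are hypothetical
# placeholders, not the benchmark numbers from this thread.

def fit_serial_fraction(n1, t1, n2, t2):
    """Solve t(n) = s + p / n for the serial (s) and parallel (p) terms."""
    p = (t1 - t2) / (1.0 / n1 - 1.0 / n2)
    s = t1 - p / n1
    return s, p

def predict(n, s, p):
    return s + p / n

if __name__ == "__main__":
    # Hypothetical wall-clock seconds per MD step at 4 and 8 processes.
    s, p = fit_serial_fraction(4, 1.00, 8, 0.60)
    for n in (16, 32):
        print("%3d procs: ~%.2f s/step (extrapolated)" % (n, predict(n, s, p)))

Obviously this ignores communication costs entirely, which is exactly where 
gigE vs. Myrinet vs. Infiniband bites, so take it as handwaving made 
explicit rather than a real prediction.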

	-Dean



