[gmx-users] SMP vs Infiniband

David spoel at xray.bmc.uu.se
Thu Feb 26 22:10:01 CET 2004


On Thu, 2004-02-26 at 20:09, Tranchemontagne, Denis wrote:
> Hello
>  
> I just finished running the gmxbench benchmarks on a small cluster and am
> noticing some odd results. I am currently only running d.villin.
>  
> I am using a version of MPICH provided by our InfiniBand supplier; I'm
> not sure how it was built. FFTW is 2.1.5, built with the --enable-mpi
> and --enable-float options.
> GROMACS 3.2 was compiled with --enable-mpi and --program-suffix=_mpi.
>  
> The systems are dual 2.4 GHz Xeons running Red Hat Linux 7.3 with
> 2.4.18-27.7.xsmp as the kernel.
>  
> When I run as 2 processes on the SMP machine I see about a 75%
> improvement; however, if I run as 2 processes, each on a different
> machine over InfiniBand, I see an 85% improvement.
>  
> This seems counterintuitive.

This means you have very good communication between the machines... It also
means that the SMP communication is not optimal. Note that d.villin is a very
small system, so its timings are dominated by overhead and noise; for serious
benchmarks you'll have to use a larger system...
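For reference, a run on one of the larger gmxbench systems (d.dppc here; the directory name, the hostfile name "host", and the _mpi-suffixed binary are assumptions based on the setup described above) might look like this:

```shell
# Sketch only: benchmark the larger d.dppc gmxbench system on 2 MPI ranks.
# Assumes grompp and mdrun_mpi are on PATH and a hostfile named "host" exists.
cd d.dppc                                   # larger system than d.villin
grompp -np 2 -v                             # preprocess the run input for 2 ranks
mpirun -np 2 -hostfile host mdrun_mpi -v    # run in parallel over the interconnect
```

With a system of this size, the compute-to-communication ratio is higher, so the SMP vs. InfiniBand comparison should be less distorted by per-step overhead.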

> My typical run is: grompp -np 2 -v ; mpirun -np 2 -hostfile host
> mdrun_mpi -v
>  
> I also tried mdrun_mpi -v -shuffle.
>  
> Any help or suggestions would be appreciated.
>  
> Denis
-- 
David.
________________________________________________________________________
David van der Spoel, PhD, Assist. Prof., Molecular Biophysics group,
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: 46 18 471 4205    fax: 46 18 511 755
spoel at xray.bmc.uu.se	spoel at gromacs.org   http://xray.bmc.uu.se/~spoel
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

More information about the gromacs.org_gmx-users mailing list