[gmx-users] SMP vs Infiniband
Mostyn Lewis
Mostyn.Lewis at sun.com
Thu Feb 26 21:41:02 CET 2004
Denis,
Not really counterintuitive. It points to the likely cause: two processes
on one machine can saturate that machine's memory bandwidth, whereas two
processes on separate machines each have a full node's memory bandwidth
to themselves.
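A quick way to check is to see whether memory bandwidth stops scaling when
two bandwidth-bound jobs share the node. A rough sketch using the STREAM
benchmark (the file name, compile flags and log names here are only
illustrative, not anything from your setup):

    # build STREAM with a plain optimizing compile
    gcc -O3 stream.c -o stream

    # baseline: one copy, note the Triad MB/s
    ./stream > one_copy.log

    # contention: two copies at once, one per CPU
    ./stream > copy_a.log &
    ./stream > copy_b.log &
    wait

    # if the two concurrent Triad figures together fall well short of twice
    # the single-copy number, the node's memory bus is the bottleneck
    grep Triad one_copy.log copy_a.log copy_b.log

On dual Xeons of that vintage the two CPUs share one front-side bus, so it
is easy to hit that limit.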
Regards,
Mostyn
On Thu, 26 Feb 2004, Tranchemontagne, Denis wrote:
> Hello
>
> I just finished running the gmxbench benchmarks on a small cluster and am
> noticing some odd results. I am currently only running d.villin.
>
> I am using a version of MPICH provided by our InfiniBand supplier; I am
> not sure how it was built. FFTW is 2.1.5, built with the --enable-mpi and
> --enable-float options.
> GROMACS 3.2 was compiled with --enable-mpi and --program-suffix=_mpi.
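> For completeness, the builds went roughly like this (the install prefix
> and the mpicc wrapper name are placeholders for whatever the vendor MPICH
> actually provides):
>
>     # point both builds at the vendor MPICH compiler wrapper
>     export CC=mpicc
>
>     # FFTW 2.1.5, single precision, with MPI support
>     cd fftw-2.1.5
>     ./configure --enable-mpi --enable-float --prefix=$HOME/opt/fftw
>     make && make install
>
>     # GROMACS 3.2, picking up the FFTW just installed;
>     # MPI binaries get the _mpi suffix
>     cd ../gromacs-3.2
>     export CPPFLAGS=-I$HOME/opt/fftw/include
>     export LDFLAGS=-L$HOME/opt/fftw/lib
>     ./configure --enable-mpi --program-suffix=_mpi
>     make && make install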
>
> The systems are dual 2.4 GHz Xeons running Red Hat Linux 7.3 with the
> 2.4.18-27.7.xsmp kernel.
>
> When I run as 2 processes on the SMP machine I see about a 75%
> improvement; however, if I run as 2 processes, each on a different
> machine over InfiniBand, I see an 85% improvement.
>
> This seems counterintuitive.
> My typical run is: grompp -np 2 -v ; mpirun -np 2 -hostfile host
> mdrun_mpi -v
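> The only thing that differs between the two cases is the host file (the
> hostnames below are placeholders for our actual nodes):
>
>     # SMP case: both processes on one node
>     node01
>     node01
>
>     # InfiniBand case: one process on each of two nodes
>     node01
>     node02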
>
> I also tried mdrun_mpi -v -shuffle.
>
> Any help or suggestions would be appreciated.
>
> Denis
>