[gmx-users] GROMACS on double processor nodes
David van der Spoel
spoel at xray.bmc.uu.se
Thu Feb 3 07:22:53 CET 2005
On Thu, 2005-02-03 at 15:48 +0100, Pietro Lopriore wrote:
> Our Environment:
> JS20 BladeCenter Power970 with Linux Suse 9 kernel 2.6.5-7.97-pseries64
> (2CPUs per node)
> Myrinet, GM 2.0.16-2,
> MPICH-GM 1.2.6.14,
> FFTW 2.1.5
> gcc 3.3.3-43.24
> gcc-64bit-9-200407011606
>
> Our GROMACS installation performs poorly when our MPI machine
> file includes all the processors in each node (node001:2,
> node002:2, node003:2, etc.). We see two mdrun processes on each node,
> which seems quite strange.
Maybe you want to change the communication order so that the internode
communication goes over Myrinet as well. I don't know how to do that;
check the MPICH documentation.
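For reference, a typical MPICH-GM launch with a machinefile that lists two
processes per node might look like the sketch below. The node names follow
the poster's example; the binary name mdrun_mpi and the input file name are
illustrative assumptions, not taken from the original post.

```shell
# Hypothetical machinefile in the poster's format (hostname:ncpus),
# listing both CPUs of each JS20 blade.
cat > machines <<'EOF'
node001:2
node002:2
node003:2
EOF

# Launch the MPI-enabled mdrun over GM with MPICH-GM's launcher.
# mdrun_mpi and topol.tpr are placeholder names; check flag spellings
# against your local MPICH-GM installation.
mpirun.ch_gm -np 6 -machinefile machines mdrun_mpi -v -s topol.tpr
```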
> In fact, performance is very poor (a 250-step calculation takes a long
> time in both double and single precision) compared to a run with only
> one processor per node.
That depends a lot on what kind of job it is.
> Is there something we're missing?
Is MPICH compiled with SMP support?
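If MPICH-GM was built without shared-memory support, the two processes on a
node communicate through the GM layer rather than through shared memory,
which can cost performance. A sketch of rebuilding with intra-node shared
memory enabled; the --with-comm=shared flag is recalled from the MPICH-GM
build notes and should be treated as an assumption to verify against the
documentation for your version.

```shell
# Rebuild MPICH-GM with shared-memory communication between processes
# on the same node. Flag names per the MPICH-GM build notes; confirm
# them for your version before relying on this.
cd mpich-gm   # hypothetical source directory name
./configure --with-device=ch_gm --with-comm=shared
make && make install
```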
> Thanks a lot in advance!!
>
> Pietro
>
--
David.
________________________________________________________________________
David van der Spoel, PhD, Assoc. Prof., Molecular Biophysics group,
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: 46 18 471 4205 fax: 46 18 511 755
spoel at xray.bmc.uu.se spoel at gromacs.org http://xray.bmc.uu.se/~spoel
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++