[gmx-users] running GROMACS-MPI on Rocks cluster- strange results
Diego Enry Gomes
diego.enry at gmail.com
Fri Mar 6 10:55:24 CET 2009
These results are not strange.
Performance depends strongly on the size and setup of your system.
Next time, run gmxbench so we have a common reference point.
Are you using GROMACS 4.0.4? It scales much better than the 3.x.x
versions. In any case, this kind of poor scaling is normal over gigabit
ethernet, even with cat6 cables and tricks like using two network
interfaces per node (also two switches, or two VPNs).
If you can't afford infiniband interfaces, there is a workaround: you
can try installing the GAMMA drivers for your ethernet interface and
using MPI/GAMMA as your MPI implementation. However, I'm not quite sure
GAMMA is stable. You will also need two network interfaces, one of
which must be Intel.
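As a quick way to see how poor the scaling really is, the wall times
quoted below can be converted to speedup and parallel efficiency. This
is a minimal sketch (times copied from the message; the 1 hr 30 min
serial run is the 90-minute baseline):

```python
# Speedup and parallel efficiency from the quoted wall times.
# Baseline: serial (non-MPI) run took 1 hr 30 min = 90 minutes.
serial_min = 90.0

runs = [
    # (description, number of processes, wall time in minutes)
    ("2 procs, one machine (MPI)",     2, 35.0),
    ("2 procs, two machines (MPI)",    2, 65.0),
    ("5 procs, five machines (MPI)",   5, 38.0),
    ("10 procs, five machines (MPI)", 10, 45.0),
]

for desc, nprocs, t in runs:
    speedup = serial_min / t        # how many times faster than serial
    efficiency = speedup / nprocs   # speedup per process (1.0 = ideal)
    print(f"{desc}: speedup {speedup:.2f}x, efficiency {efficiency:.0%}")
```

Note the pattern: efficiency is high on shared memory but drops to
around 20% once 10 processes talk over gigabit ethernet, which is
exactly the interconnect bottleneck described above.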
On Mar 6, 2009, at 12:34 AM, kala wrote:
> I ran an MD of a protein in water for 100 ps (50,000 steps) in
> several configurations:
> 1. single processor (non-MPI), Intel Core2 Duo 2.2 GHz, 2 GB RAM;
>    time taken: 1 hr 30 min
> 2. 2 processors on a single machine (MPI), similar specs; time
>    taken: 35 min
> 3. 2 processors on different machines (MPI), similar specs; time
>    taken: 1 hr 5 min
> 4. 5 processors on 5 different machines (MPI), similar specs; time
>    taken: 38 min
> 5. 10 processors on 5 different machines (MPI), similar specs; time
>    taken: 45 min
> Setup 1 ran on Open Discovery (Fedora 9); setups 2-5 ran on a Rocks
> 5.1 cluster. Network connectivity: gigabit ethernet over cat6.
> Are the above times normal, or am I making a big mistake somewhere?
> Comments are invited.
> kala bharath